The Cylc Suite Engine
User Guide
5.1.1
GNU GPL v3.0 Software License
Copyright (C) 2008-2013 Hilary Oliver, NIWA

Hilary Oliver

March 5, 2013

Contents

1 Introduction: How Cylc Works
2 Cylc Screenshots
3 Required Software
4 Installation
5 On The Meaning Of Cycle Time In Cylc
6 Quick Start Guide
7 Suite Registration
8 Suite Definition
9 Task Implementation
10 Task Job Submission
11 Running Suites
12 Other Topics In Brief
13 Suite Discovery, Sharing, And Revision Control
14 Suite Design Principles
A Suite.rc Reference
B Site/User Config File Reference
C Command Reference
D The Cylc Lockserver
E The Suite Control GUI Graph View
F Cylc Project README File
G Cylc Project INSTALL File
H Cylc Development History
I Pyro
J GNU GENERAL PUBLIC LICENSE v3.0

1 Introduction: How Cylc Works

 1.1 Scheduling Forecast Suites
 1.2 EcoConnect
 1.3 Dependence Between Tasks
 1.4 The Cylc Scheduling Algorithm

1.1 Scheduling Forecast Suites

Environmental forecasting suites generate forecast products from a potentially large group of interdependent scientific models and associated data processing tasks. They are constrained by availability of external driving data: typically one or more tasks will wait on real time observations and/or model data from an external system, and these will drive other downstream tasks, and so on. The dependency diagram for a single forecast cycle in such a system is a Directed Acyclic Graph as shown in Figure 1 (in our terminology, a forecast cycle is comprised of all tasks with a common cycle time, which is the nominal analysis time or start time of the forecast models in the group). In real time operation processing will consist of a series of distinct forecast cycles that are each initiated, after a gap, by arrival of the new cycle’s external driving data.

From a job scheduling perspective task execution order in such a system must be carefully controlled in order to avoid dependency violations. Ideally, each task should be queued for execution at the instant its last prerequisite is satisfied; this is the best that can be done even if queued tasks are not able to execute immediately because of resource contention.

1.2 EcoConnect

Cylc was developed for the EcoConnect Forecasting System at NIWA (National Institute of Water and Atmospheric Research, New Zealand). EcoConnect takes real time atmospheric and stream flow observations, and operational global weather forecasts from the Met Office (UK), and uses these to drive global sea state and regional data assimilating weather models, which in turn drive regional sea state, storm surge, and catchment river models, plus tide prediction, and a large number of associated data collection, quality control, preprocessing, post-processing, product generation, and archiving tasks. The global sea state forecast runs once daily. The regional weather forecast runs four times daily but it supplies surface winds and pressure to several downstream models that run only twice daily, and precipitation accumulations to catchment river models that run on an hourly cycle assimilating real time stream flow observations and using the most recently available regional weather forecast. EcoConnect runs on heterogeneous distributed hardware, including a massively parallel supercomputer and several Linux servers.

1.3 Dependence Between Tasks

1.3.1 Intra-cycle Dependence

Most inter-task dependencies exist within a single forecast cycle. Figure 1 shows the dependency diagram for a single forecast cycle of a simple example suite of three forecast models (a, b, and c) and three post processing or product generation tasks (d, e and f). A scheduler capable of handling this must manage, within a single forecast cycle, multiple parallel streams of execution that branch when one task generates output for several downstream tasks, and merge when one task takes input from several upstream tasks.




Figure 1: The dependency graph for a single forecast cycle of a simple example suite. Tasks a, b, and c represent forecast models, d, e and f are post processing or product generation tasks, and x represents external data that the upstream forecast model depends on.
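In cylc's suite.rc graph notation (covered in Section 8 and Appendix A), a single-cycle dependency graph of this kind might be written roughly as follows. The exact topology of Figure 1 is not fully specified by the text, so the graph string below is an illustrative sketch only, assuming a six-hourly cycle:

```
[scheduling]
    [[dependencies]]
        [[[0,6,12,18]]]
            graph = """x => a
                       a => b & d
                       b => c & e
                       c => f"""
```

Each `=>` arrow is a trigger: the right-hand task can run once the left-hand task has completed.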





Figure 2: The optimal job schedule for two consecutive cycles of our example suite during real time operation, assuming that all tasks trigger off upstream tasks finishing completely. The horizontal extent of a task bar represents its execution time, and the vertical blue lines show when the external driving data becomes available.


Figure 2 shows the optimal job schedule for two consecutive cycles of the example suite in real time operation, given execution times represented by the horizontal extent of the task bars. There is a time gap between cycles as the suite waits on new external driving data. Each task in the example suite happens to trigger off upstream tasks finishing, rather than off any intermediate output or event; this is merely a simplification that makes for clearer diagrams.




Figure 3: If the external driving data is available in advance, can we start running the next cycle early?





Figure 4: A naive attempt to overlap two consecutive cycles using the single-cycle dependency graph. The red shaded tasks will fail because of dependency violations (or will not be able to run because of upstream dependency violations).





Figure 5: The best that can be done in general when inter-cycle dependence is ignored.


Now the question arises: what happens if the external driving data for upcoming cycles is available in advance, as it would be after a significant delay in operations, or when running a historical case study? While the forecast model a appears to depend only on the external data x at this stage of the discussion, in fact it would typically also depend on its own previous instance for the model background state used in initializing the new forecast. Thus, as alluded to in Figure 3, task a could in principle start as soon as its predecessor has finished. Figure 4 shows, however, that starting a whole new cycle at this point is dangerous - it results in dependency violations in half of the tasks in the example suite. In fact the situation is even worse than this: imagine that task b in the first cycle is delayed for any reason after the second cycle has been launched. Clearly we must handle inter-cycle dependence explicitly, or else agree not to start the next cycle early, as illustrated in Figure 5.

1.3.2 Inter-cycle Dependence

Forecast models typically depend on their own most recent previous forecast for background state or restart files of some kind, and different types of tasks in different forecast cycles can also be linked (in an atmospheric forecast analysis suite, for instance, the weather model may also generate background states for use by the observation processing and data-assimilation systems in the next cycle). In real time operation this inter-cycle dependence can be ignored because it is automatically satisfied when each cycle finishes before the next one begins. If, on the other hand, it is explicitly accounted for, it complicates the dependency graph by destroying the clean boundary between forecast cycles. Figure 6 illustrates the problem for our simple example suite assuming the minimal inter-cycle dependence: the forecast models (a, b, and c) each depend on their own previous instances.
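In cylc's graph notation the minimal inter-cycle dependence of Figure 6 can be expressed with previous-cycle offsets. This is a sketch only, assuming a six-hourly cycle and an illustrative intra-cycle topology:

```
[scheduling]
    [[dependencies]]
        [[[0,6,12,18]]]
            graph = """a => b => c
                       a[T-6] => a
                       b[T-6] => b
                       c[T-6] => c"""
```

Here `a[T-6] => a` means that each instance of a triggers off its own instance from the previous (six hours earlier) cycle, exactly as the dashed arrows in Figure 6 indicate.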

For this reason, and perhaps because we tend to see forecasting suites as inherently sequential (with respect to whole forecast cycles), other metaschedulers ignore inter-cycle dependence and therefore require a series of distinct cycles at all times. While this does not affect normal real time operation, it can be a serious impediment when advance availability of external driving data makes it possible, in principle, to run some tasks from upcoming cycles before the current cycle is finished - as suggested at the end of the previous section. This occurs after delays (late arrival of external data, system maintenance, etc.) and, to an even greater extent, in historical case studies and in parallel test suites that are delayed with respect to the main operation. It is a serious problem, in particular, for suites that have little downtime between forecast cycles and therefore take many cycles to catch up after a delay. Without taking account of inter-cycle dependence, the best that can be done, in general, is to reduce the gap between cycles to zero, as shown in Figure 5. A limited crude overlap of the single cycle job schedule may be possible for specific task sets, but the allowable overlap may change if new tasks are added, and it is still dangerous: it amounts to running different parts of a dependent system as if they were not dependent, so it cannot be guaranteed that some unforeseen delay in one cycle after the next cycle has begun (e.g. due to resource contention or task failures) won't result in dependency violations.




Figure 6: The complete dependency graph for the example suite, assuming the least possible inter-cycle dependence: the forecast models (a, b, and c) depend on their own previous instances. The dashed arrows show connections to previous and subsequent forecast cycles.





Figure 7: The optimal two cycle job schedule when the next cycle’s driving data is available in advance, possible in principle when inter-cycle dependence is handled explicitly.


Figure 7 shows, in contrast to Figure 4, the optimal two cycle job schedule obtained by respecting all inter-cycle dependence. This assumes no delays due to resource contention or otherwise - i.e. every task runs as soon as it is ready to run. The scheduler running this suite must be able to adapt dynamically to external conditions that impact on multi-cycle scheduling in the presence of inter-cycle dependence or else, again, risk bringing the system down with dependency violations.




Figure 8: Job schedules for the example suite after a delay of almost one whole forecast cycle, when inter-cycle dependence is taken into account (above the time axis), and when it is not (below the time axis). The colored lines indicate the time that each cycle is delayed, and normal “caught up” cycles are shaded gray.





Figure 9: Job schedules for the example suite in case study mode, or after a long delay, when the external driving data are available many cycles in advance. Above the time axis is the optimal schedule obtained when the suite is constrained only by its true dependencies, as in Figure 3, and underneath is the best that can be done, in general, when inter-cycle dependence is ignored.


To further illustrate the potential benefits of proper inter-cycle dependency handling, Figure 8 shows an operational delay of almost one whole cycle in a suite with little downtime between cycles. Above the time axis is the optimal schedule that is possible in principle when inter-cycle dependence is taken into account, and below it is the only safe schedule possible in general when it is ignored. In the former case, even the cycle immediately after the delay is hardly affected, and subsequent cycles are all on time, whilst in the latter case it takes five full cycles to catch up to normal real time operation.

Similarly, Figure 9 shows example suite job schedules for an historical case study, or when catching up after a very long delay; i.e. when the external driving data are available many cycles in advance. Task a, which as the most upstream forecast model is likely to be a resource intensive atmosphere or ocean model, has no upstream dependence on co-temporal tasks and can therefore run continuously, regardless of how much downstream processing is yet to be completed in its own, or any previous, forecast cycle (actually, task a does depend on the co-temporal task x, which waits on the external driving data, but x returns immediately when the external data is available in advance, so the result stands). The other forecast models can also cycle continuously or with short gaps between instances, and some post processing tasks, which have no previous-instance dependence, can run continuously or even overlap (e.g. e in this case). Thus, even for this very simple example suite, tasks from three or four different cycles can in principle run simultaneously at any given time. In fact, if our tasks are able to trigger off internal outputs of upstream tasks, rather than waiting on full completion, successive instances of the forecast models could overlap as well (because model restart outputs are generally completed early in the forecast), for an even more efficient job schedule.

1.4 The Cylc Scheduling Algorithm




Figure 10: How cylc sees a suite, in contrast to the multi-cycle dependency graph of Figure 6. Task colors represent different cycle times, and the small squares and circles represent different prerequisites and outputs. A task can run when its prerequisites are satisfied by the outputs of other tasks in the pool.


Cylc manages a pool of proxy objects that represent real tasks in the forecasting suite. A task proxy can run the real task that it represents when its prerequisites are satisfied, and can receive reports of completed outputs from the real task as it runs. There is no global cycling mechanism to advance the suite in time; instead each individual task proxy has a private cycle time and spawns its own successor. Task proxies are self-contained - they just know their own prerequisites and outputs and are not aware of the wider suite context. Inter-cycle dependencies are not treated as special, and the task pool can be populated with tasks from many different cycle times. The cylc task pool is illustrated in Figure 10. Now, whenever any task changes state due to completion of an output, every task checks to see if its own prerequisites are now satisfied. Moreover, this matching of prerequisites and outputs involves the entire task pool, regardless of individual cycle times, so that inter- and intra-cycle dependence is handled with ease.

Thus without using global cycling mechanisms, and treating all inter-task dependence equally, cylc in effect gets a pool of tasks to self-organize by negotiating their own dependencies so that optimal scheduling, as described in the previous section, emerges naturally at run time.
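The self-organizing pool can be sketched in a few lines of Python. This is an illustrative toy, not cylc's actual implementation: task proxies hold only their own prerequisites and outputs, and prerequisite/output matching spans the whole pool regardless of cycle time.

```python
# Toy sketch of cylc-style dependency negotiation - NOT cylc's actual
# code. Task proxies know only their own prerequisites and outputs;
# matching spans the whole pool regardless of cycle time.

class TaskProxy:
    def __init__(self, name, cycle, prerequisites, outputs):
        self.name = name
        self.cycle = cycle  # private cycle time
        self.prerequisites = set(prerequisites)
        self.outputs = set(outputs)
        self.state = "waiting"

def run_pool(pool):
    """Repeatedly run any waiting task whose prerequisites are all
    satisfied by outputs completed so far; return the execution order."""
    completed = set()
    order = []
    progress = True
    while progress:
        progress = False
        for task in pool:
            if task.state == "waiting" and task.prerequisites <= completed:
                task.state = "succeeded"
                completed |= task.outputs
                order.append((task.name, task.cycle))
                progress = True
    return order

# Two cycles of a toy suite: b triggers off a, and the cycle-6 instance
# of a triggers off its own cycle-0 predecessor (inter-cycle dependence,
# handled exactly like any other dependence).
pool = [
    TaskProxy("a", 0, [], ["a.0.done"]),
    TaskProxy("b", 0, ["a.0.done"], ["b.0.done"]),
    TaskProxy("a", 6, ["a.0.done"], ["a.6.done"]),
    TaskProxy("b", 6, ["a.6.done"], ["b.6.done"]),
]
order = run_pool(pool)
print(order)  # [('a', 0), ('b', 0), ('a', 6), ('b', 6)]
```

Note that the inter-cycle trigger (a at cycle 6 waiting on a at cycle 0) needs no special treatment: it is just another prerequisite matched against the pool's completed outputs.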

2 Cylc Screenshots




Figure 11: The cylc dbviewer GUI, showing one suite running on port 7766.





Figure 12: A cylc suite definition in the vim editor.





Figure 13: gcylc dot and text views.





Figure 14: gcylc graph and text views.





Figure 15: A large suite graphed by cylc.


3 Required Software

 3.1 Known Version Compatibility Issues
 3.2 Other Software Used Internally By Cylc

The following packages are technically optional, since you can construct and control cylc suites without dependency graphing, the cylc GUIs, and template processing, but this is not recommended; and without Jinja2 you will not be able to run many of the example suites:

If installing via a Linux package manager, you may also need a couple of devel packages for the pygraphviz build:

Since cylc-5.0, any tagged version of cylc can be downloaded and installed for use, but you have to generate the user guide yourself from the LaTeX source (by running make). For this purpose, the following packages are also required:

And for HTML versions of the User Guide:

Finally, cylc makes heavy use of “ordered dictionary” data structures, and a significant speedup in parsing large suites can be had by installing the fast C-coded ordereddict module by Anthon van der Neut:

This module is currently included with cylc under $CYLC_DIR/ext, and is built by the top level cylc Makefile. If you install the resulting library appropriately cylc will automatically use it in place of a slower Python implementation of the ordered dictionary structure.
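The fallback behavior described here follows a common Python pattern. The module names below are illustrative assumptions, not cylc's actual import paths:

```python
# Illustrative fallback pattern only - the module and attribute names
# here are assumptions, not cylc's actual import paths: prefer a fast
# C-coded ordered dictionary, fall back to a portable implementation.
try:
    from _ordereddict import ordereddict as OrderedDict  # fast C module
except ImportError:
    from collections import OrderedDict  # portable fallback

d = OrderedDict()
d["first"] = 1
d["second"] = 2
print(list(d))  # ['first', 'second'] - insertion order preserved
```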

3.1 Known Version Compatibility Issues

Cylc should run “out of the box” on recent Linux distributions.

For distributed suites the Pyro versions installed on all suite or task hosts must be mutually compatible. Using identical Pyro versions guarantees compatibility but may not be strictly necessary because cylc uses Pyro rather minimally.

Recent versions of Pyro require Python 2.5 or greater, due to use of the with statement introduced in 2.5.

3.1.1 Pyro 3.9 and Earlier

Beware of Linux distributions that come packaged with old Pyro versions. Pyro versions 3.9 and earlier are not compatible with the new-style Python classes used in cylc. It has been reported that Ubuntu 10.04 (Lucid Lynx), released in April 2010, suffers from this problem. Surprisingly, so does Ubuntu 11.10 (Oneiric Ocelot), released in October 2011 - and therefore, presumably, so do the intervening Ubuntu releases. Attempting to run a suite with Pyro 3.9 or earlier installed results in the following Python traceback:

Traceback (most recent call last): 
  File "/home/oliverh/cylc/bin/_run", line 232, in <module> 
    server = start() 
  File "/home/oliverh/cylc/bin/_run", line 92, in __init__ 
    scheduler.__init__( self ) 
  File "/home/oliverh/cylc/lib/cylc/scheduler.py", line 141, in __init__ 
    self.load_tasks() 
  File "/home/oliverh/cylc/bin/_run", line 141, in load_tasks_cold 
    itask = self.config.get_task_proxy( name, tag, 'waiting', stopctime=None, startup=True ) 
  File "/home/oliverh/cylc/lib/cylc/config.py", line 1252, in get_task_proxy 
    return self.taskdefs[name].get_task_class()( ctime, state, stopctime, startup ) 
  File "/home/oliverh/cylc/lib/cylc/taskdef.py", line 453, in tclass_init 
    print '-', sself.__class__.__name__, sself.__class__.__bases_ 
AttributeError: type object 'A' has no attribute '_taskdef__bases_' 
_run --debug testsuite.1322742021 2010010106 failed: 1

3.1.2 Apple Mac OSX

It has been reported that cylc runs fine on OSX 10.6 SnowLeopard, but on OSX 10.7 Lion there is an issue with constructing proper FQDNs (Fully Qualified Domain Names) that requires a change to the DNS service. Here’s how to solve the problem:
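One quick way to diagnose whether your host reports a proper FQDN, using only the Python standard library (this is a diagnostic sketch, not the DNS fix itself):

```python
import socket

# Diagnostic sketch only: a fully qualified domain name normally
# contains at least one dot (e.g. host.example.com).
fqdn = socket.getfqdn()
print("FQDN:", fqdn)
if "." not in fqdn:
    print("warning: this host does not report a fully qualified name")
```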

3.2 Other Software Used Internally By Cylc

Cylc has absorbed the following in modified form (no need to install these separately):

4 Installation

 4.1 Install The External Dependencies
 4.2 Install The Cylc Release
 4.3 Site And User Configuration Files
 4.4 Import The Example Suites
 4.5 Automated Database Test
 4.6 Automated Scheduler Tests
 4.7 Complete Non System-Level Installation
 4.8 What Next?
 4.9 Upgrading To New Cylc Versions

4.1 Install The External Dependencies

First install Pyro, graphviz, Pygraphviz, Jinja2, TeX, and ImageMagick using the package manager on your system if possible; otherwise download the packages manually and follow their native installation documentation. On a modern Linux system, this is very easy. For example, to install cylc-5.1.0 on the Fedora 18 Linux distribution:

% yum install graphviz       # (2.28) 
% yum install graphviz-devel # (for pygraphviz build) 
% yum install python-devel   # (ditto) 
 
# TeX packages, and ImageMagick, for generating the Cylc User Guide: 
% yum install texlive 
% yum install texlive-tex4ht 
% yum install texlive-tocloft 
% yum install texlive-framed 
% yum install texlive-preprint 
% yum install ImageMagick 
 
# Python packages: 
% easy_install pyro   # (3.16) 
% easy_install Jinja2 # (2.6) 
% easy_install pygraphviz 
 
# (sqlite 3.7.13 already installed on the system)

If you do not have root access on your intended cylc host machine and cannot get a sysadmin to do this at system level, see Section 4.7 for some tips on installing everything to a local user account.

4.2 Install The Cylc Release

Cylc typically installs into a normal user account; just unpack the release tarball in the desired location (referred to below as $CYLC_DIR) and see the $CYLC_DIR/INSTALL file for instructions (see also Section G).

4.3 Site And User Configuration Files

Cylc uses site and user configuration files to set some important global parameters, such as the range of network ports and the editor to use on suite definitions:

$CYLC_DIR/conf/site/site.rc  # site config file for global settings 
$HOME/.cylc/user.rc          # user config file for global settings

These files can be auto-generated with all settings initially commented out by running this command:

% cylc get-global-config --write-site/--write-user

The content of these config files, in terms of legal items and default values, is defined by the ConfigObj configspec file,

$CYLC_DIR/conf/site/cfgspec

The site config file should be adapted to set sensible defaults for all users when cylc is installed. Users can then override most settings in their own user config file if necessary. Some settings cannot be overridden by users (this is determined by #SITE ONLY comments in the configspec file, which are passed through to the config files).
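As an illustration of the override mechanism, a user config file might contain something like the following. The item names here are for illustration only - run cylc get-global-config to see the actual available settings and their defaults:

```
# $HOME/.cylc/user.rc - override a site default, e.g. the suite editors:
[editors]
    terminal = vim
    gui = gvim -f
```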

4.4 Import The Example Suites

Run the following command immediately after installation to copy the cylc example suites to a given destination directory and register each one for use:

% cylc admin import-examples TOPDIR

Here TOPDIR is the top level directory into which the example suite definitions will be copied. To view the content of the resulting suite database, run the cylc db print command or its GUI counterpart cylc db dbviewer:

% cylc db print --tree -x 'Auto|Quick' 
cylc-4-5-1-1344571037 
  |-AutoCleanup 
  | |-FamilyFailHook  family failure hook script example 
  | ‘-FamilyFailTask  family failure cleanup task example 
  |-AutoRecover 
  | |-async           asynchronous automated failure recovery example 
  | ‘-cycling         cycling automated failure recovery example 
  ‘-QuickStart 
    |-a          Quick Start Example A 
    |-b          Quick Start Example B 
    |-c          Quick Start Example C 
    ‘-z          Quick Start Example Z

Note that the dots in the cylc release version number are replaced with hyphens because ‘.’ is the registration name delimiter. Type cylc db print --help to see what the command options mean. Note also that the Unix “seconds since epoch” string is appended to the top level of the suite name hierarchy in order to ensure uniqueness if the example suites are imported multiple times at the same cylc version: cylc-4-5-1-1344571037. You can re-register the example suites to get rid of this and make the names easier to type:

% cylc rereg cylc-4-5-1-1344571037 examples 
 
% cylc db print --tree -x 'Auto|Quick' 
examples 
 |-AutoRecover 
 | |-CleanupTask  No title provided 
 | |-EventHook     family failure task event hook example 
 | ‘-suicide       automated failure recovery example 
 ‘-QuickStart 
   |-a             Quick Start Example A 
   |-b             Quick Start Example B 
   |-c             Quick Start Example C 
   ‘-z             Quick Start Example Z

Here’s the same suite database listing in flat form:

% cylc db print -y 'Auto|Quick' 
examples.QuickStart.c | /tmp/oliverh/examples/QuickStart/c 
examples.QuickStart.z | /tmp/oliverh/examples/QuickStart/z 
examples.QuickStart.a | /tmp/oliverh/examples/QuickStart/a 
examples.QuickStart.b | /tmp/oliverh/examples/QuickStart/b 
examples.AutoRecover.CleanupTask | /tmp/oliverh/examples/AutoRecover/CleanupTask 
examples.AutoRecover.EventHook | /tmp/oliverh/examples/AutoRecover/EventHook 
examples.AutoRecover.suicide | /tmp/oliverh/examples/AutoRecover/suicide

4.5 Automated Database Test

The command cylc admin test-db gives suite registration database functionality a workout - it copies one of the cylc example suites, registers it under a new name, and then manipulates it by recopying the suite in various ways, and so on, before finally deleting the test registrations. This should complete without error in a few seconds.

4.6 Automated Scheduler Tests

Cylc has a battery of self-diagnosing test suites for pre-release testing that you can also run after installation to check that everything works properly.

The command cylc admin test-battery copies, registers, and runs the official cylc test suites, and reports the results. Some tests may take several minutes to complete. Pre-test requirements:

Thanks to pre-release testing, no tests should fail on an official cylc release if these requirements are satisfied. If you find that some do fail, consider reporting this to cylc's maintainer or the cylc mailing list. A possible exception is that some suite timeouts may be set too low if your suite host is slow or overloaded. Suite timeouts are intended to catch error conditions, such as unexpected task failures, that prevent a suite from completing and shutting down automatically.

4.7 Complete Non System-Level Installation

If you do not have root access to your host machine and cannot easily get Pyro, graphviz, Pygraphviz, and Jinja2 installed at system level, here’s how to install everything under your home directory.

First, cylc is already designed to be installed into a normal user account - just unpack the release tarball into $CYLC_DIR. If you invoke cylc commands at this stage you will get a warning that Pyro is not installed.

Next, create a new sub-directory for the source distributions - the subsections below use $CYLC_DIR/external or $HOME/external; either location works as long as your paths are consistent - and download the Pyro, Graphviz, and Pygraphviz source distributions to it (the URLs are given at the beginning of Section 3).

4.7.1 Pyro

Install Pyro under $HOME/external/installed as follows:

% cd $HOME/external 
% tar xzf Pyro-3.14.tar.gz 
% cd Pyro-3.14 
% python setup.py install --prefix=$HOME/external/installed

Take note of the resulting Python site-packages directory under external/installed/, e.g.:

$HOME/external/installed/lib64/python2.6/site-packages/

The exact path will depend on your local Python environment. Add the following to your login scripts:

# .profile 
PYTHONPATH=$HOME/external/installed/lib64/python2.6/site-packages:$PYTHONPATH 
PATH=$HOME/external/installed/bin:$PATH

Now you should be able to get cylc to print its release version:

% . $HOME/.profile   # (or log in again) 
% cylc -v 
x.y.z

If this command aborts and says that Pyro is not installed or is not available, then you have either not installed Pyro (check the output of the installation command carefully) or you have not pointed to the installed Pyro modules in your PYTHONPATH, or you have not sourced the cylc environment since updating PYTHONPATH.

Note that Pyro can also be installed with easy_install, which downloads and installs python packages in one shot.

At this point you should have access to all cylc functionality except for suite graphing and the gcylc graph view.

4.7.2 Graphviz

Install Graphviz under $CYLC_DIR/external/installed as follows:

% cd $CYLC_DIR/external 
% tar xzf graphviz-2.28.0.tar.gz 
% cd graphviz-2.28.0 
% ./configure --prefix=$CYLC_DIR/external/installed --with-qt=no 
% make 
% make install

This installs graphviz files into the bin, include, and lib sub-directories of your local installation directory. The graphviz lib and include locations are required when installing Pygraphviz (next).

Note that the graphviz build, reportedly, may fail on systems that do not have QT installed, hence the ./configure --with-qt=no option above.

4.7.3 Pygraphviz

Install Pygraphviz under $CYLC_DIR/external/installed as follows:

% cd $CYLC_DIR/external 
% tar xzf pygraphviz-1.1.tar.gz 
% cd pygraphviz-1.1

Now edit setup.py lines 31 and 32 to specify the graphviz lib and include directories:

library_path=os.environ['CYLC_DIR'] + '/external/installed/lib' 
include_path=os.environ['CYLC_DIR'] + '/external/installed/include/graphviz'

Or you can just specify the absolute paths if you like, instead of using the $CYLC_DIR environment variable. Check that these are the correct library and include paths by inspecting the contents of the specified directories, and adjust them if necessary. Finally, install pygraphviz:

% export CYLC_DIR=/path/to/cylc 
% python setup.py install --prefix=$CYLC_DIR/external/installed

This may or may not, depending on your local Python setup, install the Pygraphviz modules into the same place as the Pyro modules, e.g.:

% ls $CYLC_DIR/external/installed/lib64/python2.6/site-packages/ 
 pygraphviz  pygraphviz-1.1-py2.6.egg-info  Pyro  Pyro-3.14-py2.6.egg-info

If not, add the correct Pygraphviz installation path to your PYTHONPATH.

The easiest way to check that pygraphviz has been installed properly is to start an interactive Python session (type python after sourcing the cylc environment to configure your PYTHONPATH) and then type import pygraphviz at the interpreter prompt. If this results in the error message ImportError: No module named pygraphviz then either you have not installed pygraphviz properly, you have not configured your PYTHONPATH to point to the installed pygraphviz modules, or you have not sourced the cylc environment since updating PYTHONPATH. Finally, if you have installed pygraphviz and configured your PYTHONPATH, but graphviz itself has not been installed properly (or the graphviz libraries have been deleted since you installed pygraphviz), then the initial pygraphviz import will succeed but a lower level import will fail when the pygraphviz modules cannot load the underlying graphviz libraries - in that case, reinstall graphviz.
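The checks described above can also be scripted. This sketch reports a diagnosis rather than failing, so it runs whether or not pygraphviz is installed:

```python
# Scripted version of the interactive check described above; it reports
# a diagnosis rather than failing, so it runs in any environment.
try:
    import pygraphviz
except ImportError:
    status = "pygraphviz not found: check installation and PYTHONPATH"
else:
    try:
        pygraphviz.AGraph()  # exercises the underlying graphviz libraries
        status = "pygraphviz and graphviz OK"
    except Exception as exc:
        status = "pygraphviz imports but graphviz libraries failed: %s" % exc
print(status)
```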

4.7.4 Jinja2

You can download Jinja2 from the project web site and install it with:

python setup.py install --prefix=/path/to/install/location

or use the easy_install command to do it all in one step. Either way the final installed package location must be present in the PYTHONPATH variable, and you may have to arrange for this first. You may also need to create the installed package directory if it doesn’t exist already (if so the install will abort and print the name of the missing directory in the error message). Here’s how to easy_install Jinja2 into your new private python site packages directory:

% LOCALPREFIX=$CYLC_DIR/external/installed 
% LOCALPACKAGES=$CYLC_DIR/external/installed/lib64/python2.6/site-packages 
% export PYTHONPATH=$LOCALPACKAGES:$PYTHONPATH 
% easy_install --prefix=$LOCALPREFIX Jinja2

Adapt the site-packages path according to your actual path, as above.

4.8 What Next?

You should now have access to all cylc functionality. Import the example suites if you have not done so already (Section 4.4) then test your cylc installation by running the automated suite database test (Section 4.5) and the automated scheduler test (Section 4.6), then go on to the Quick Start Guide (Section 6).

4.9 Upgrading To New Cylc Versions

Upgrading is just a matter of unpacking the new cylc release and optionally re-importing the example suites for the new version.

5 On The Meaning Of Cycle Time In Cylc

From using other schedulers you may be accustomed to the idea that a forecasting suite has a “current cycle time”, which is typically the analysis time or nominal start time of the main forecast model(s) in the suite, and that the whole suite advances to the next forecast cycle when all tasks in the current cycle have finished (or even when a particular wall clock time is reached, in real time operation). As is explained in the Introduction, this is not how cylc works.

Cylc suites advance by means of individual tasks, each with a private cycle time, independently spawning successors at the next valid cycle time for the task - not by incrementing a suite-wide forecast cycle. Each task will be submitted when its own prerequisites are satisfied, regardless of which other tasks, at whatever cycle times, happen to be running at the time. It may still be convenient at times, however, to refer to the “current cycle”, the “previous cycle”, or the “next cycle” and so forth, with reference to a particular task, or in the sense of all tasks that “belong to” a particular forecast cycle. But keep in mind that the members of these groups may not be present simultaneously in the running suite - i.e. different tasks may pass through the “current cycle” (etc.) at different times as the suite evolves, particularly in delayed (catch up) operation.

6 Quick Start Guide

 6.1 View The examples.QuickStart.a Suite Definition
 6.2 Plotting examples.QuickStart.a
 6.3 Run The examples.QuickStart.a Suite
 6.4 examples.QuickStart.b - Handling Cold-Starts Properly
 6.5 examples.QuickStart.c - Real Task Implementations
 6.6 Monitoring Running Suites
 6.7 Searching A Suite
 6.8 Comparing Suites
 6.9 Validating A Suite
 6.10 Other Example Suites

This section works through some basic cylc functionality using the “QuickStart” example suites, which you can import to your suite database by running the cylc admin import-examples command and then reregistering the top level suite name to “examples” as described in Section 4.4. You should end up with the following QuickStart suites (but the directory paths on the right are up to you):

% cylc db print --tree QuickStart 
 examples 
   ‘-QuickStart 
     |-a        Quick Start Example A | /tmp/oliverh/examples/QuickStart/a 
     |-b        Quick Start Example B | /tmp/oliverh/examples/QuickStart/b 
     |-c        Quick Start Example C | /tmp/oliverh/examples/QuickStart/c 
     ‘-z        Quick Start Example Z | /tmp/oliverh/examples/QuickStart/z

6.1 View The examples.QuickStart.a Suite Definition

Cylc suites are defined by suite.rc files, discussed at length in Suite Definition (Section 8) and the Suite.rc Reference (Appendix A). To view the examples.QuickStart.a suite definition right-click on the suite name and choose ‘Edit’; or use the edit command:

% cylc edit examples.QuickStart.a

This opens the suite definition in your editor (configured in the cylc site or user config file - see Section 4.3) from the suite definition directory so that you can easily open other suite files in the editor. You can of course do this manually, but by using the cylc interface you don’t have to remember suite directory locations. If you do need to move to a suite definition directory, you can do this:

% cd $( cylc db get-dir examples.QuickStart.a )

Suites that use include-files can optionally be edited in a temporarily inlined state - the inlined file gets split back into its constituent include-files when you save it and exit the editor. While editing, the inlined file becomes the official suite definition so that changes take effect whenever you save the file.

Anyhow, you should now see the following suite.rc file in your editor:

 
title = "Quick Start Example A" 
description = "(see the Cylc User Guide)" 
 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    runahead limit = 12 
    [[special tasks]] 
        start-up        = Prep 
        clock-triggered = GetData(1) 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph  = """Prep => GetData => Model => PostA 
                        Model[T-6] => Model""" 
        [[[6,18]]] 
            graph = "Model => PostB" 
 
[visualization] # optional 
    [[node groups]] 
        post = PostA, PostB 
    [[node attributes]] 
        post  = "style=unfilled", "color=blue", "shape=rectangle" 
        PostB = "style=filled", "fillcolor=seagreen2" 
        Model  = "style=filled", "fillcolor=red" 
        GetData = "style=filled", "fillcolor=yellow3", "shape=septagon" 
        Prep = "shape=box", "style=bold", "color=red3"

Cylc comes with syntax highlighting and section folding for the vim editor, and an emacs font-lock mode - see Section 8.2.3.

This defines a complete, valid, runnable suite. Here’s how to interpret it: At 0, 6, 12, and 18 hours each day a clock-triggered task called GetData triggers 1 hour after the wall clock reaches its (GetData’s) nominal cycle time; then a task called Model triggers when GetData finishes; and a task called PostA triggers when Model is finished. Additionally, Model depends on its own previous instance from 6 hours earlier; and twice per day at 6 and 18 hours another task called PostB also triggers off Model.

All the tasks in this suite can run in parallel with their own previous instances if the opportunity arises (i.e. if their prerequisites are satisfied before the previous instance is finished). Most tasks should be capable of this (see Section 14.4) but if necessary you can force particular tasks to run sequentially like this:

# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        sequential = GetData, PostB

Finally, when the suite is cold-started (started from scratch) it is made to wait on a special synchronous start-up task called Prep. Start-up tasks are one-off (non-spawning) tasks that are only used at suite start-up, and any dependence on them only applies at suite start-up. They cannot be used in conditional trigger expressions with normal cycling tasks, because the trigger becomes undefined in subsequent cycles. Start-up tasks are synchronous because they have a defined cycle time even though they are not cycling tasks. Cylc also has asynchronous one-off tasks, which have no cycle time:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        graph = "prep"     # an asynchronous one-off task (no cycle time) 
        [[[ 0,6,12,18 ]]] 
            graph = "prep => foo => bar"   # followed by cycling tasks

The optional visualization section configures graph plotting.

6.2 Plotting The examples.QuickStart.a Dependency Graph

Right-click on the examples.QuickStart.a suite in the db viewer and choose Graph; or use the command line:

% cylc graph examples.QuickStart.a 2011052300 2011052318 &

This will pop up a zoomable, pannable, graph viewer showing the graph of Figure 16. If you edit the suite.rc file the viewer will update in real time whenever you save the file.


PIC


Figure 16: The QuickStart.a dependency graph, plotted by cylc.


6.3 Run The examples.QuickStart.a Suite

Each cylc task defines command scripting to invoke the right external processing when the task is ready to run. This has not been explicitly configured in the example suite, so for all tasks it defaults to the dummy task scripting inherited from the root namespace (see Section 8):

% cylc get-config -i [runtime][GetData]'command scripting' examples.QuickStart.a 
['echo Dummy command scripting; sleep 10']

The command arguments above reflect suite definition section nesting.

Now start a suite control GUI by right-clicking on the suite in the db viewer and choosing Control Graph View. You can also open other control GUIs for the same suite if you like. Multiple GUIs running at the same time will automatically connect to the same running suite (they won’t try to run separate instances). Note also that if you shut down a suite control GUI, the suite will keep running. You can reconnect to it later by opening another control GUI.

In the control GUI click on Control → Run, enter an initial cold-start cycle time (e.g. 2011052306), and select “Hold (pause) on start-up” so that the suite will start in the held state (tasks will not be submitted even if they are ready to run).

Do not choose an initial cycle time in the future unless you’re running in simulation mode, or nothing much will happen until that time.

If the initial cycle time ends in 06 or 18 the suite controller should look like Figure 17, or otherwise (00 or 12) like Figure 18.


PIC


Figure 17: Suite examples.QuickStart.a at start-up with an initial cycle time ending in 06 or 18 hours. Yellow nodes represent waiting tasks in the held state.



PIC


Figure 18: Suite examples.QuickStart.a at start-up with an initial cycle time ending in 00 or 12 hours. Yellow nodes represent waiting tasks in the held state and greyed out nodes are tasks from the base graph, defined in the suite.rc file, that aren’t currently live in the suite.


The reason for the difference in graph structure between the two figures is this: cylc starts up with every task present in the waiting state (blue) at the initial cycle time or at the first subsequent valid cycle time for the task - and PostB does not run at 00 or 12. The greyed out tasks are from the base graph, defined in the suite.rc file, and aren’t actually present in the suite as yet (they are shown in the graph in order to put the live tasks in context).

Now, click on Control → Release in the suite control GUI to release the hold on the suite, and observe what happens: the GetData tasks will rapidly go off in parallel out to a few cycles ahead (how far ahead depends on the suite runahead limit, as explained below and in The Suite Runahead Limit, Section 11.6.1), and then the suite will stall, as shown in Figures 19 and 20.


PIC


Figure 19: Suite examples.QuickStart.a running, showing several consecutive instances of the clock-triggered GetData task running at once, out to the suite runahead limit of 12 hours.



PIC


Figure 20: Suite examples.QuickStart.a stalled after the clock-triggered GetData tasks have finished because of Model’s previous-cycle dependence and the suite runahead limit.


The Prep task runs immediately because it has no prerequisites and is not clock-triggered. The clock-triggered GetData tasks then all go off at once because they have no prerequisites (i.e. they do not have to wait on any upstream tasks), their trigger time has long passed (the initial cycle time was in the past), and they are not sequential tasks (so they are able to run in parallel - try declaring GetData sequential to see the difference). Beyond the suite runahead limit (set to 12 hours in this suite - see Section 11.6.1) GetData is put into a special ‘runahead’ held state, indicated by the darker blue graph node; the task will be released from this state when the slower tasks have caught up sufficiently. The runahead limit is designed to stop quick tasks from running off too far into the future; it is of little consequence in real time operation, when suites are typically constrained by clock-triggered tasks.

6.3.1 Viewing The State Of Tasks

If you’re wondering why a particular task has not triggered yet in a running suite you can view the current state of its prerequisites by right-clicking on the task and choosing ‘View State’, or using cylc show. Do this for the first Model task, which appears to be stuck in the waiting state; it will pop up a small window as in Figure 21.


PIC


Figure 21: Viewing current task state after right-clicking on a task in gcylc. The same information is available from the cylc show command.


It is clear that the task is not running - and consequently, by virtue of the runahead limit, that the suite has stalled - because Model[T] is waiting on Model[T-6], which does not exist at suite start-up. Model represents a warm-cycled forecast model that depends on a model background state or restart file(s) generated by its own previous run.

6.3.2 Triggering Tasks Manually

Right-click on the waiting Model task and choose Trigger, or use cylc trigger, to force the task to trigger and thereby get the suite up and running. In a real suite this would not be sufficient: the real forecast model that Model represents would fail for lack of the real restart files that it requires as input. We’ll see how to handle this properly shortly.

6.3.3 Suite Shut-Down And Restart

After watching the examples.QuickStart.a suite run for a while choose Stop from the Control menu, or cylc stop, to shut it down. The default stop method waits for any tasks that are currently running to finish before shutting the suite down, so that the final recorded suite state is perfectly consistent with what actually happened.

You can restart the suite from where it left off by choosing Control → Run and selecting the ‘restart’ option, or using cylc restart. Note that cylc always writes a special state dump, and logs its name, prior to actioning any intervention; you can also restart a suite from one of these states, rather than from the default most recent state.

6.4 examples.QuickStart.b - Handling Cold-Starts Properly

Now take a look at examples.QuickStart.b, which is a minor modification of examples.QuickStart.a. Its suite.rc file has a new cold-start task called ColdModel,

# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        cold-start = ColdModel

and the dependency graph (see also Figure 22) looks like this:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[ 0,6,12,18 ]]] 
            graph  = """Prep => GetData & ColdModel 
                        GetData => Model => PostA 
                        ColdModel | Model[T-6] => Model""" 
        [[[ 6,18 ]]] 
            graph = "Model => PostB"

In other words, Model[T] can trigger off either Model[T-6] or ColdModel[T].


PIC


Figure 22: The examples.QuickStart.b dependency graph showing a model cold start task.


Cold-start tasks are one-off tasks used in the first cycle to satisfy another task’s intercycle dependence at suite start-up (when there is no previous cycle to do it). For instance, a series of cold-start tasks may be used to cold-start a warm-cycled model. Unlike start-up tasks, though, cold-start dependence is preserved in subsequent cycles, so cold-start tasks must generally appear in OR’d conditional triggers in order to avoid stalling the suite after the first cycle (as in this example). This means cold-start tasks can be inserted into a running suite, if necessary, to cold-start their associated tasks in case of problems that prevent continued normal warm cycling.

A cold-start task in a real suite may submit a real “cold start forecast”, or similar, to generate the previous-cycle input files required by the associated model, or it may just stand in for some external spinup process, or similar, that has to be completed before the suite is started (in the latter case the cold-start task would be a dummy task that just reports successful completion in order to satisfy the initial previous-cycle dependence of the model).

Run examples.QuickStart.b to confirm that no manual triggering is required to get the suite started now.

6.5 examples.QuickStart.c - Real Task Implementations

The suite examples.QuickStart.c is the same as examples.QuickStart.b except that it has real task implementations (scripts located in the suite bin directory) that generate and consume files in such a way that they have to run according to the graph of Figure 22. The suite gets them to run together out of a common I/O workspace, configured via the suite.rc file.

By studying this suite and its tasks, and by making quick copies of it to modify and run, you should be able to learn a lot about how to build real cylc suites. Here’s the complete suite definition:

 
title = "Quick Start Example C" 
description  = "(Quick Start b plus real tasks)" 
 
# A clock-triggered data-gathering task, a warm-cycled model, and two 
# post-processing tasks (one runs every second cycle). The tasks are not 
# cylc-aware, have independently configured I/O directories, and abort 
# if their input files do not exist. This suite gets them all to run out 
# of a common I/O workspace (although the warm-cycled model uses a 
# private running directory for its restart files). 
 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    [[special tasks]] 
        start-up        = Prep 
        cold-start      = ColdModel 
        clock-triggered = GetData(1) 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph  = """Prep => GetData & ColdModel 
                        GetData => Model => PostA 
                        ColdModel | Model[T-6] => Model""" 
        [[[6,18]]] 
            graph = "Model => PostB" 
 
[runtime] 
    [[root]] 
        [[[environment]]] 
            TASK_EXE_SECONDS = 5 
            WORKSPACE = /tmp/$USER/$CYLC_SUITE_REG_NAME/common 
 
    [[Prep]] 
        description = "prepare the suite workspace for a new run" 
        command scripting = clean-workspace.sh $WORKSPACE 
 
    [[GetData]] 
        description = "retrieve data for the current cycle time" 
        command scripting = GetData.sh 
        [[[environment]]] 
            GETDATA_OUTPUT_DIR = $WORKSPACE 
 
    [[Models]] 
        [[[environment]]] 
            MODEL_INPUT_DIR = $WORKSPACE 
            MODEL_OUTPUT_DIR = $WORKSPACE 
            MODEL_RUNNING_DIR = $WORKSPACE/Model 
    [[ColdModel]] 
        inherit = Models 
        description = "cold start the forecast model" 
        command scripting = Model.sh --coldstart 
    [[Model]] 
        inherit = Models 
        description = "the forecast model" 
        command scripting = Model.sh 
 
    [[Post]] 
        description = "post processing for model" 
        [[[environment]]] 
            INPUT_DIR  = $WORKSPACE 
            OUTPUT_DIR = $WORKSPACE 
    [[PostA,PostB]] 
        inherit = Post 
        command scripting = ${CYLC_TASK_NAME}.sh 
 
[visualization] 
    default node attributes = "shape=ellipse" 
    [[node attributes]] 
        Post  = "style=unfilled", "color=blue", "shape=rectangle" 
        PostB = "style=filled", "fillcolor=seagreen2" 
        Models  = "style=filled", "fillcolor=red" 
        ColdModel = "fillcolor=lightblue" 
        GetData = "style=filled", "fillcolor=yellow", "shape=septagon" 
        Prep = "shape=box", "style=bold", "color=red3"

Here’s the namespace hierarchy defined by this suite:

% cylc list --tree examples.QuickStart.c 
root 
 |-GetData     retrieve data for the current cycle time 
 |-Models 
 | |-ColdModel cold start the forecast model 
 | ‘-Model     the forecast model 
 |-Post 
 | |-PostA     post processing for model 
 | ‘-PostB     post processing for model 
 ‘-Prep        prepare the suite workspace for a new run

And here, for example, is the complete implementation for the PostA task (located with the other task scripts in the suite bin directory):

 
#!/bin/bash 
 
set -e 
 
cylc checkvars  TASK_EXE_SECONDS 
cylc checkvars -d INPUT_DIR 
cylc checkvars -c OUTPUT_DIR 
 
# CHECK INPUT FILES EXIST 
PRE=$INPUT_DIR/surface-winds-${CYLC_TASK_CYCLE_TIME}.nc 
if [[ ! -f $PRE ]]; then 
    echo "ERROR, file not found $PRE" >&2 
    exit 1 
fi 
 
echo "Hello from $CYLC_TASK_NAME at $CYLC_TASK_CYCLE_TIME in $CYLC_SUITE_REG_NAME" 
 
sleep $TASK_EXE_SECONDS 
 
# generate outputs 
touch $OUTPUT_DIR/surface-wind.products

6.6 Monitoring Running Suites

6.6.1 Suite stdout and stderr

Cylc writes some information, including warnings and errors, to the suite stdout stream. In debug mode (cylc run --debug) this output is directed to the terminal; otherwise it is directed to a log file under the suite run directory.

6.6.2 Suite Logs

The suite event log records timestamped events as the suite runs; it is stored under the suite run directory.


PIC


Figure 23: A cylc suite log.


Figure 23 shows a suite log viewed from gcylc. The cylc log command also prints the suite event, stdout, and stderr logs, with optional filtering of the event log for specific tasks.

6.6.3 Task stdout and stderr Logs

The stdout and stderr streams from running tasks (if they do not detach from the process that does the initial job submission, or manage their own output) are written to a suite-specific sub-directory of the suite run directory. The location of this directory is determined by site/user config files, defaulting to $HOME/cylc-run/$CYLC_SUITE_REG_NAME/log/job/ (where $CYLC_SUITE_REG_NAME is the registered suite name).
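As a minimal sketch (assuming the default site/user configuration quoted above, and a hypothetical suite name), the default job log location can be computed like this:

```shell
#!/bin/bash
# Sketch: compute the default task job log directory described above.
# The suite name is hypothetical; the path layout is the documented default.
CYLC_SUITE_REG_NAME=examples.QuickStart.a
JOB_LOG_DIR=$HOME/cylc-run/$CYLC_SUITE_REG_NAME/log/job
echo "$JOB_LOG_DIR"
```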

6.7 Searching A Suite

The cylc suite search tool reports matches in the suite.rc file by line number, suite section, and file, even if include-files are used (and even if they are nested), and by file and line number for matches in the suite bin directory. The following output listing is from a search of the examples.QuickStart.c suite.

% cylc grep OUTPUT_DIR examples.QuickStart.c
SUITE: examples.QuickStart.c /tmp/oliverh/QuickStart/c/suite.rc 
PATTERN: OUTPUT_DIR 
 
FILE: /tmp/oliverh/QuickStart/c/suite.rc 
   SECTION: [runtime]->[[GetData]]->[[[environment]]] 
      (40):             GETDATA_OUTPUT_DIR = $WORKSPACE 
   SECTION: [runtime]->[[Models]]->[[[environment]]] 
      (45):             MODEL_OUTPUT_DIR = $WORKSPACE 
   SECTION: [runtime]->[[Post]]->[[[environment]]] 
      (60):             OUTPUT_DIR = $WORKSPACE 
 
FILE: /tmp/oliverh/QuickStart/c/bin/PostB.sh 
   (7): cylc checkvars -c OUTPUT_DIR 
   (21): touch $OUTPUT_DIR/precip.products 
 
FILE: /tmp/oliverh/QuickStart/c/bin/Model.sh 
   (11): cylc checkvars -c MODEL_OUTPUT_DIR MODEL_RUNNING_DIR 
   (54): touch $MODEL_OUTPUT_DIR/surface-winds-${CYLC_TASK_CYCLE_TIME}.nc 
   (55): touch $MODEL_OUTPUT_DIR/precipitation-${CYLC_TASK_CYCLE_TIME}.nc 
 
FILE: /tmp/oliverh/QuickStart/c/bin/PostA.sh 
   (7): cylc checkvars -c OUTPUT_DIR 
   (21): touch $OUTPUT_DIR/surface-wind.products 
 
FILE: /tmp/oliverh/QuickStart/c/bin/GetData.sh 
   (6): cylc checkvars -c GETDATA_OUTPUT_DIR 
   (11): touch $GETDATA_OUTPUT_DIR/obs-${CYLC_TASK_CYCLE_TIME}.nc

(Suite search is also available from the db viewer right-click menu).

6.8 Comparing Suites

The cylc diff command compares suites and reports differences by suite.rc section and item. Note that some differences may be due to suite-name-specific defaults that are not explicitly configured in either suite.

6.9 Validating A Suite

Suite validation checks for errors by parsing the suite definition, comparing all items against the suite.rc specification file, and then parsing the suite graph and attempting to instantiate all task proxy objects. This can be done using the cylc GUIs or cylc validate:

% cylc validate -v foo.bar 
Parsing Suite Definition 
LOADING suite.rc 
VALIDATING against the suite.rc specification. 
PARSING clock-triggered tasks 
PARSING runtime generator expressions 
PARSING runtime hierarchies 
PARSING SUITE GRAPH 
Instantiating Task Proxies: 
root 
 |-GEN 
 | |-OPS 
 | | |-aircraft    ... OK 
 | | |-atovs       ... OK 
 | | ‘-atovs_post  ... OK 
 | ‘-VAR 
 |   |-AnPF        ... OK 
 |   ‘-ConLS       ... OK 
 |-baz 
 | |-bar1          ... OK 
 | ‘-bar2          ... OK 
 |-foo             ... OK 
 ‘-prepobs         ... OK 
Suite foo.bar is valid for cylc-4.2.0

For more information on suite validation see Section 8.2.5.

6.10 Other Example Suites

Cylc has been designed from the ground up to make prototyping and testing new suites very easy. Simply drawing (in text) a dependency graph in a new suite definition creates a valid suite that you can run: the tasks will be dummy tasks that default to emitting an identifying message, sleeping for a few seconds, and then exiting; but you can then arrange for particular tasks to do more complex things by configuring their runtime properties appropriately.
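For example, a suite.rc as minimal as the following (an illustrative sketch, not one of the shipped examples) is a complete, valid, runnable suite in which every task runs as a dummy task:

```ini
title = "a minimal dummy suite"
[scheduling]
    initial cycle time = 2011010106
    final cycle time   = 2011010200
    [[dependencies]]
        [[[0,6,12,18]]]
            graph = """prep => foo => bar
                       foo => baz"""
```

Running it executes the default dummy scripting for prep, foo, bar, and baz in the order implied by the graph.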

Cylc has example suites intended to illustrate most facets of suite construction. These are held centrally under $CYLC_DIR/examples and can be imported to your suite database by running ‘cylc admin import-examples’. They all run “out of the box” and can be copied and modified by users to test almost anything. Some of them just configure a suite dependency graph, in which case cylc will run dummy tasks according to the graph; some also configure task runtime properties (e.g. command scripting and environment variables) within the suite definition; and some have real task implementations that generate and consume real files and will fail if they are not executed in the right order. All of the example suites are portable in the sense that all suite and task I/O directory paths incorporate the suite registration name (this is in fact the default for any cylc suite), so you can run multiple copies of the same suite at once without any interference between them.

 
title = "Quick Start Example Z" 
description = "(Example A without the visualization config)" 
 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    [[special tasks]] 
        start-up        = Prep 
        clock-triggered = GetData(1) 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph  = """Prep => GetData => Model => PostA 
                        Model[T-6] => Model""" 
        [[[6,18]]] 
            graph = "Model => PostB"

(This suite is explained in the Quick Start Guide, Section 6).

6.10.1 Choosing The Initial Cycle Time

When running a suite in live mode that contains clock-triggered tasks, do not give an initial cycle time in the future or nothing will happen until that time. However, you can also run any suite in simulation mode or dummy mode, in which case a future start time is fine (see Section 11.13).

7 Suite Registration

 7.1 Suite Databases
 7.2 Database Operations

Cylc commands target particular suites via names registered in a suite database, so that you don’t need to remember and continually refer to the actual location of the suite definition on disk. A suite registration name is a hierarchical name akin to a directory path but delimited by the ‘.’ character; this allows suites to be organised in nested tree-like structures:

% cylc db print -t nwp 
nwp 
 |-oper 
 | |-region1  Local Model Region1       /oper/nwp/suite_defs/LocalModel/nested/Region1 
 | ‘-region2  Local Model Region2       /oper/nwp/suite_defs/LocalModel/nested/Region2 
 ‘-test 
   ‘-region1  Local Model TEST Region1  /home/oliverh/nwp_suites/Regional/TESTING/Region1

Note that registration groups are entirely virtual: they do not need to be explicitly created before use, and they automatically disappear if all suites are removed from them. From the listing above, for example, to move the suite nwp.oper.region2 into the nwp.test group:

% cylc db rereg nwp.oper.region2 nwp.test.region2 
REREGISTER nwp.oper.region2 to nwp.test.region2 
% cylc db print -tx nwp 
nwp 
 |-oper 
 | ‘-region1  Local Model Region1 
 ‘-test 
   |-region1  Local Model TEST Region1 
   ‘-region2  Local Model Region2

And to move nwp.test.region2 into a new group nwp.para:

% cylc db rereg nwp.test.region2 nwp.para.region2 
REREGISTER nwp.test.region2 to nwp.para.region2 
% cylc db print -tx nwp 
nwp 
 |-oper 
 | ‘-region1  Local Model Region1 
 |-test 
 | ‘-region1  Local Model TEST Region1 
 ‘-para 
   ‘-region2  Local Model Region2

Currently you cannot explicitly indicate a group name on the command line by appending a dot character. Rather, in database operations such as copy, reregister, or unregister, the identity of the source item (group or suite) is inferred from the content of the database; and if the source item is a group, the target must be a group too (or will become one, if it is created by the operation). This means that you cannot copy a single suite into a group that does not exist yet unless you specify the entire target registration name.

cylc db register --help shows a number of other examples.

7.0.2 Suite Passphrases

Any client process that connects to a running suite (this includes task messaging and user-invoked interrogation and control commands) must authenticate with a secure passphrase that has been loaded by the suite. A random passphrase is generated automatically in the suite definition directory at registration time, if one does not already exist there. For the default Pyro-based connection method the passphrase file must be distributed to any other accounts that host running tasks or from which you need monitoring or control access to the running suite. Alternatively, an ssh-based communication method can be used to automatically re-invoke cylc commands, including task messaging, on the suite host, in which case the suite passphrase is only needed on the suite host. See Section 11.1.1 for more on how cylc’s client/server communication works and how to use it.

7.1 Suite Databases

Each user has a suite database that associates registered suite names with their respective suite definition directory locations. Hierarchical suite names stored in the database can be displayed in a tree structure. By right-clicking on a suite in your database, from within the db viewer, or using cylc commands, you can:

  1. start a suite control GUI to run the suite (or connect to a running suite),
  2. submit a single task to run, just as it would be submitted by its suite,
  3. view the suite stdout and stderr streams,
  4. view the suite log (which records all events and messages from tasks),
  5. retrieve the suite description,
  6. list tasks in the suite,
  7. view the suite namespace hierarchy,
  8. edit the suite definition in your editor,
  9. plot the suite dependency graph,
  10. search the suite definition and bin scripts,
  11. validate the suite definition,
  12. copy the suite or group,
  13. alias the suite name to another name,
  14. compare (difference) the suite with another suite,
  15. unregister the suite or group,
  16. reregister the suite or group.

Note that the suite title shown in the db viewer is parsed from the suite.rc file at the time of initial registration; if you change the title (by editing the suite.rc file) use cylc db refresh or the db viewer’s View → Refresh to update the database.

The user suite database file is $HOME/.cylc/DB.

7.2 Database Operations

On the command line, the ‘database’ (or ‘db’) command category contains commands to implement the aforementioned operations.

% cylc db help 
CATEGORY: db|database - Suite registration, copying, deletion, etc. 
 
HELP: cylc [db|database] COMMAND help,--help 
  You can abbreviate db|database and COMMAND. 
  The category db|database may be omitted. 
 
COMMANDS: 
  alias ............... Register an alternative name for a suite 
  copy|cp ............. Copy a suite or a group of suites 
  dbviewer ............ GUI to view registered suites and operate on them. 
  get-directory ....... Retrieve suite definition directory paths 
  print ............... Print registered suites 
  refresh ............. Report invalid registrations and update suite titles 
  register ............ Register a suite for use 
  reregister|rename ... Change the name of a suite 
  unregister .......... Unregister and optionally delete suites

Groups of suites (at any level in the registration hierarchy), as well as individual suites, can be deleted, copied, imported, and exported. To do this, just use suite group names as the source and/or target of the operation, as appropriate. For instance, if a group foo.bar contains the suites foo.bar.baz and foo.bar.qux, you can copy a single suite like this:

% cylc copy foo.bar.baz boo $HOME/suites

(resulting in a new suite boo); or the group like this:

% cylc copy foo.bar boo $HOME/suites

(resulting in new suites boo.baz and boo.qux); or the group like this:

% cylc copy foo boo $HOME/suites

(resulting in new suites boo.bar.baz and boo.bar.qux). When suites are copied, the suite definition directories are copied into a directory tree, under the target directory, that reflects the registration name hierarchy. cylc copy --help has some explicit examples.

The same functionality is also available by right-clicking on suites or suite groups in the db viewer GUI, as shown in Figure 11.

8 Suite Definition

 8.1 Suite Definition Directories
 8.2 Suite.rc File Overview
 8.3 Scheduling - Dependency Graphs
 8.4 Runtime - Task Configuration
 8.5 Visualization
 8.6 Jinja2 Suite Templates
 8.7 Special Placeholder Variables
 8.8 Omitting Tasks At Runtime
 8.9 Naked Dummy Tasks And Strict Validation

Cylc suites are defined in structured, validated, suite.rc files that concisely specify the properties of, and the relationships between, the various tasks managed by the suite. This section of the User Guide deals with the format and content of the suite.rc file, including task definition. Task implementation - what’s required of the real commands, scripts, or programs that do the processing that the tasks represent - is covered in Section 9; and task job submission - how tasks are submitted to run - is in Section 10.

8.1 Suite Definition Directories

A cylc suite definition directory contains the suite.rc suite definition file, optionally a bin directory of scripts and executables used by tasks, and any other suite-related files.

A typical example:

/path/to/my/suite   # suite definition directory 
    suite.rc           # THE SUITE DEFINITION FILE 
    bin/               # scripts and executables used by tasks 
        foo.sh 
        bar.sh 
        ... 
    # (OPTIONAL) any other suite-related files, for example: 
    inc/               # suite.rc include-files 
        nwp-tasks.rc 
        globals.rc 
        ... 
    doc/               # documentation 
    control/           # control files 
    ancil/             # ancillary files 
    ...

8.2 Suite.rc File Overview

Suite.rc files conform to the ConfigObj extended INI format (http://www.voidspace.org.uk/python/configobj.html) with several modifications to allow continuation lines and include-files, and to make it legal to redefine environment variables and scheduler directives (duplicate config item definitions are normally flagged as an error).

Additionally, embedded template processor expressions may be used in the file, to programmatically generate the final suite definition seen by cylc. Currently the Jinja2 template engine is supported (http://jinja.pocoo.org/docs). In the future cylc may provide a plug-in interface to allow use of other template engines too. See Jinja2 Suite Templates (Section 8.6) for some examples.

8.2.1 Syntax

The following pseudo-listing illustrates legal raw suite.rc syntax. Suites using the Jinja2 template processor (Section 8.6) can of course use Jinja2 syntax as well (it must generate raw syntax on processing):

# a full line comment 
an item = value # a trailing comment 
a boolean item = True # or False 
one string item = the quick brown fox # string quotes optional ... 
two string item = "the quick, brown fox" # ... unless internal commas 
a multiline string item = """the quick brown fox 
jumped over the lazy dog""" # triple quoted 
a list item = foo, bar, baz   # comma separated 
a list item with continuation = a, b, c, \ 
                                d, e, f 
[section] 
    item = value 
%include inc/vars/foo.inc  # include file 
    [[subsection]] 
        item = value 
        [[[subsubsection]]] 
            item = value 
[another section] 
    [[another subsection]] 
        # ... 
    # ... 
# ...

8.2.2 Include-Files

Cylc has native support for suite.rc include-files, which may help to organize large suites. Inclusion boundaries are completely arbitrary - you can think of include-files as chunks of the suite.rc file simply cut-and-pasted into another file. Include-files may be included multiple times in the same file, and even nested. Include-file paths can be specified portably relative to the suite definition directory, e.g.:

# SUITE.RC 
# include the file $CYLC_SUITE_DEF_PATH/inc/foo.rc: 
%include inc/foo.rc

Editing Temporarily Inlined Suites Cylc’s native file inclusion mechanism supports optional inlined editing:

% cylc edit --inline SUITE

The suite will be split back into its constituent include-files when you exit the edit session. While editing, the inlined file becomes the official suite definition so that changes take effect whenever you save the file. See cylc prep edit --help for more information.

Include-Files via Jinja2 Jinja2 (Section 8.6) provides template inclusion functionality although this is more akin to Python module import than simple text inclusion, and the implications of this for suite design have not yet been explored.

8.2.3 Syntax Highlighting In Vim and Emacs

Cylc comes with a syntax file to configure suite.rc syntax highlighting and section folding in the vim editor, as shown in Figure 12, and an emacs font-lock mode. Both are stored under the cylc conf directory:

$CYLC_DIR/conf/cylc.vim 
$CYLC_DIR/conf/cylc-mode.el

Refer to comments at the top of each file to see how to use them.

8.2.4 Gross File Structure

Cylc suite.rc files consist of a suite title and description followed by configuration items grouped under several top level section headings, chiefly [scheduling] (Section 8.3), [runtime] (Section 8.4), and [visualization] (Section 8.5).

8.2.5 Validation

Cylc suite.rc files are automatically validated against a specification that defines all legal entries, values, options, and defaults (held in $CYLC_DIR/conf/suiterc/). This detects any formatting errors, typographic errors, illegal items and illegal values prior to run time. Some values are complex strings that require further parsing by cylc to determine their correctness (this is also done during validation). All legal entries are documented in the Suite.rc Reference (Appendix A).

The validator reports the line numbers of detected errors. Here’s an example showing a subsection heading with a missing right bracket.

% cylc validate foo.bar 
Parsing Suite Config File 
ERROR: [[special tasks] 
NestingError('Cannot compute the section depth at line 19.',) 
_validate foo.bar failed:  1

If the suite.rc file contains include-files you can use cylc view to view an inlined copy with correct line numbers (you can also edit suites in a temporarily inlined state with cylc edit --inline).

Validation does not check the validity of chosen job submission methods; this is to allow users to extend cylc with their own job submission methods, which are by definition unknown to the suite.rc spec.

8.3 Scheduling - Dependency Graphs

The [scheduling] section of a suite.rc file defines the relationships between tasks in a suite - the information that allows cylc to determine when tasks are ready to run. The most important component of this is the suite dependency graph. Cylc graph notation makes clear textual graph representations that are very concise because sections of the graph that repeat at different hours of the day, say, only have to be defined once. Here’s an example with dependencies that vary depending on cycle time:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] # validity (hours of the day) 
            graph = """ 
A => B & C   # B and C trigger off A 
A[T-6] => A  # Model A restart trigger 
                    """ 
        [[[6,18]]] 
            graph = "C => X"

Figure 24 shows the complete suite.rc listing alongside the suite graph. This is actually a complete, valid, runnable suite (it will use default runtime properties and command scripting). You'll need to trigger task A manually to get the suite started, because A[T] depends on A[T-6] and at start-up there is no previous cycle to satisfy that dependence - how to handle this properly is described in Handling Intercycle Dependencies At Start-Up (Section 8.3.5) and in the Quick Start Guide (Section 6).


# SUITE.RC 
title = "Dependency Graph Example" 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] # validity (hours) 
            graph = """ 
A => B & C   # B and C trigger off A 
A[T-6] => A  # Model A restart trigger 
                    """ 
        [[[6,18]]] # hours 
            graph = "C => X" 
[visualization] 
    [[node attributes]] 
        X = "color=red"

PIC


Figure 24: Example Suite


8.3.1 Graph String Syntax

Multiline graph strings may contain blank lines, arbitrary white space, and internal comments.

8.3.2 Interpreting Graph Strings

Suite dependency graphs can be broken down into pairs in which the left side (which may be a single task or family, or several that are conditionally related) defines a trigger for the task or family on the right. For instance the “word graph” C triggers off B which triggers off A can be deconstructed into pairs C triggers off B and B triggers off A. In this section we use only the default trigger type, which is to trigger off the upstream task succeeding; see Section 8.3.4 for other available triggers.

In the case of cycling tasks, the triggers defined by a graph string are valid for cycle times matching the list of hours specified for the graph section. For example this graph,

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "A => B"

implies that B triggers off A for cycle times in which the hour matches 0 or 12.

To define intercycle dependencies, attach an offset indicator to the left side of a pair:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "A[T-12] => B"

This means B[T] triggers off A[T-12] for cycle times T with hours matching 0 or 12. Note that T must be left implicit unless there is a cycle time offset (this helps to keep graphs clean and concise because the majority of tasks in a typical suite will only depend on others with the same cycle time) and that cycle time offsets can only appear on the left (because each pair defines a trigger for the right task at cycle time T).
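The offset arithmetic itself is simple date-time subtraction. As a rough illustrative sketch (assuming the ten-digit YYYYMMDDHH cycle time format; offset_cycle_time is a hypothetical helper, not part of cylc):

```python
from datetime import datetime, timedelta

def offset_cycle_time(ctime, hours):
    # Apply an [T-hours] offset to a YYYYMMDDHH cycle time string.
    t = datetime.strptime(ctime, "%Y%m%d%H")
    return (t - timedelta(hours=hours)).strftime("%Y%m%d%H")

# A[T-12] evaluated at cycle time 2013030500 refers to A at 2013030412:
print(offset_cycle_time("2013030500", 12))  # 2013030412
```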

Now, having explained that dependency graphs are interpreted pairwise, you can optionally chain pairs together to “follow a path” through the graph. So this,

# SUITE.RC 
    graph = """A => B  # B triggers off A 
               B => C  # C triggers off B"""

is equivalent to this:

# SUITE.RC 
    graph = "A => B => C"
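
This chain-to-pairs decomposition can be sketched in a few lines of Python (a simplified illustration only - cylc's real parser also handles conditional operators, offsets, and trigger types):

```python
def chain_to_pairs(chain):
    # Split "A => B => C" into the trigger pairs [("A", "B"), ("B", "C")].
    nodes = [n.strip() for n in chain.split("=>")]
    return list(zip(nodes[:-1], nodes[1:]))

print(chain_to_pairs("A => B => C"))  # [('A', 'B'), ('B', 'C')]
```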

Cycle time offsets, if they appear in a chain of triggers, must be leftmost (because, as explained previously they can’t appear on the right of any pair). So this is legal:

# SUITE.RC 
    graph = "A[T-6] => B => C"  # OK

but this isn’t:

# SUITE.RC 
    graph = "A => B[T-6] => C"  # ERROR!

The trigger A => B[T-6] does not make sense in any case - if this kind of relationship seems necessary it probably means that B should be “reassigned” to the next cycle (keep in mind that cycle time is really just a label used to define the relationships between tasks).

Each trigger in the graph must be unique but the same task can appear in multiple pairs or chains. Separately defined triggers for the same task have an AND relationship. So this:

# SUITE.RC 
    graph = """A => X  # X triggers off A 
               B => X  # X also triggers off B"""

is equivalent to this:

# SUITE.RC 
    graph = "A & B => X"  # X triggers off A AND B

In summary, the branching tree structure of a dependency graph can be partitioned into lines (in the suite.rc graph string) of pairs or chains, in any way you like, with liberal use of internal white space and comments to make the graph structure as clear as possible.

# SUITE.RC 
# B triggers if A succeeds, then C and D trigger if B succeeds: 
    graph = "A => B => C & D" 
# which is equivalent to this: 
    graph = """A => B => C 
               B => D""" 
# and to this: 
    graph = """A => B => D 
               B => C""" 
# and to this: 
    graph = """A => B 
               B => C 
               B => D""" 
# and it can even be written like this: 
    graph = """A => B # blank line follows: 
 
               B => C # comment ... 
               B => D"""

Handling Long Graph Lines Long chains of dependencies can be split into pairs:

# SUITE.RC 
    graph = "A => B => C" 
# is equivalent to this: 
    graph = """A => B 
               B => C""" 
# BUT THIS IS AN ERROR: 
    graph = """A => B => # WRONG! 
               C"""      # WRONG!

If you have very long task names, or long conditional trigger expressions (below) then you can use the suite.rc line continuation marker:

# SUITE.RC 
    graph = "A => B \ 
    => C"  # OK

Note that a line continuation marker must be the final character on the line; it cannot be followed by trailing spaces or a comment.
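
The joining behaviour can be mimicked as follows (an illustrative sketch, not cylc's actual parsing code; note it requires the backslash to be the final character, as above):

```python
def join_continuations(text):
    # Join physical lines ending in a backslash into single logical lines.
    logical, buf = [], ""
    for line in text.splitlines():
        if line.endswith("\\"):
            buf += line[:-1]
        else:
            logical.append(buf + line)
            buf = ""
    return logical

print(join_continuations('graph = "A => B \\\n    => C"'))
```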

8.3.3 Graph Types (VALIDITY)

A suite definition can contain multiple graph strings that are combined to generate the final graph. There are different graph VALIDITY section headings for cycling, one-off asynchronous, and repeating asynchronous tasks. Additionally, there may be multiple graph strings under different VALIDITY sections for cycling tasks with different dependencies at different cycle times.

One-off Asynchronous Tasks Figure 25 shows a small suite of one-off asynchronous tasks; these have no associated cycle time and don’t spawn successors (once they’re all finished the suite just exits). The integer 1 attached to each graph node is just an arbitrary label, akin to the task cycle time in cycling tasks; it increments when a repeating asynchronous task (below) spawns.


# SUITE.RC 
title = some one-off asynchronous tasks 
[scheduling] 
    [[dependencies]] 
        graph = "foo => bar & baz => qux"

PIC


Figure 25: One-off Asynchronous Tasks.


Cycling Tasks For cycling tasks the graph VALIDITY section heading defines a sequence of cycle times for which the subsequent graph section is valid. Figure 26 shows a small suite of cycling tasks.


# SUITE.RC 
title = some cycling tasks 
# (no dependence between cycles here) 
[scheduling] 
    [[dependencies]] 
        [[[0,12]]] 
            graph = "foo => bar & baz => qux"

PIC


Figure 26: Cycling Tasks.


Stepped Daily, Monthly, And Yearly Cycling In addition to the original hours-of-the-day section headings, cylc now has an extensible cycling mechanism and (so far) stepped daily, monthly, and yearly cycling modules:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[Daily(20100809,2)]]] 
            graph = "foo => bar" 
        [[[Monthly(201008,2)]]] 
            graph = "cat[T-2] => dog" 
        [[[Yearly(2010,2)]]] 
            graph = "apple => orange"

In the examples above the section headings define an infinite sequence of cycle times anchored on the first (date-time) argument and stepped by the second (integer) argument. The anchoring serves to generate the same sequence, as opposed to some offset sequence, regardless of the initial cycle time from which the suite is started. The anchor date can lie outside of the suite’s initial and final cycle times.
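The effect of anchoring can be sketched as follows (an illustrative computation, not cylc's implementation; first_daily_cycle is a hypothetical helper):

```python
import math
from datetime import datetime, timedelta

def first_daily_cycle(anchor, step_days, start):
    # First member of the anchored sequence at or after the start time.
    a = datetime.strptime(anchor, "%Y%m%d")
    s = datetime.strptime(start, "%Y%m%d")
    k = math.ceil((s - a).days / step_days)
    return (a + timedelta(days=k * step_days)).strftime("%Y%m%d")

# Daily(20100809,2) yields ... 20100809, 20100811, 20100813 ... so a suite
# started at 20100812 first cycles at 20100813, not at 20100812 + 2 days:
print(first_daily_cycle("20100809", 2, "20100812"))  # 20100813
```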

Note that hours-of-the-day graph section headings can also be written to explicitly reference the associated cycling module:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[HoursOfTheDay(0,6,12,18)]]] # same as [[[0,6,12,18]]] 
            graph = "red => blue"

How Multiple Graph Strings Combine For a cycling graph with multiple validity sections for different hours of the day, the different sections add to generate the complete graph. Different graph sections can overlap (i.e. the same hours may appear in multiple section headings) and the same tasks may appear in multiple sections, but individual dependencies should be unique across the entire graph. For example, the following graph defines a duplicate prerequisite for task C:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "A => B => C" 
        [[[6,18]]] 
            graph = "B => C => X" 
            # duplicate prerequisite: B => C already defined at 6, 18

This does not affect scheduling, but for the sake of clarity and brevity the graph should be written like this:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "A => B => C" 
        [[[6,18]]] 
            # X triggers off C only at 6 and 18 hours 
            graph = "C => X"

Combined Asynchronous And Synchronous Graphs Cycling tasks can be made to wait on one-off asynchronous tasks, as shown in Figure 27. Alternatively, they can be made to wait on one-off synchronous start-up tasks, which have an associated cycle time even though they are non-cycling - see Figure 28.

Synchronous Start-up vs One-off Asynchronous Tasks One-off synchronous start-up tasks run only when a cycling suite is cold-started and they are often associated with subsequent one-off cold-start tasks used to bootstrap a cycling suite into existence.

The distinction between cold- and warm-start is only meaningful for cycling tasks, and one-off asynchronous tasks may be best used in constructing entirely non-cycling suites.

However, one-off asynchronous tasks can precede cycling tasks in the same suite, as shown above. It seems likely that, if used in this way, they will be intended as start-up tasks - so currently one-off asynchronous tasks only run in a cold-start.


# SUITE.RC 
title = one-off async and cycling tasks 
# (with dependence between cycles too) 
[scheduling] 
    [[dependencies]] 
        graph = "prep1 => prep2" 
        [[[0,12]]] 
            graph = """ 
    prep2 => foo => bar & baz => qux 
    foo[T-12] => foo 
                    """

PIC


Figure 27: One-off asynchronous and cycling tasks in the same suite.



# SUITE.RC 
title = one-off start-up and cycling tasks 
# (with dependence between cycles too) 
[scheduling] 
    [[special tasks]] 
        start-up = prep1, prep2 
    [[dependencies]] 
        [[[0,12]]] 
            graph = """ 
    prep1 => prep2 => foo => bar & baz => qux 
    foo[T-12] => foo 
                    """

PIC


Figure 28: One-off synchronous and cycling tasks in the same suite.


Repeating Asynchronous Tasks Repeating asynchronous tasks can be used, for example, to process satellite data that arrives at irregular time intervals. Each new dataset must have a unique “asynchronous ID”. If it doesn’t naturally have such an ID a string representation of the data arrival time could be used. The graph VALIDITY section heading must contain “ASYNCID:” followed by a regular expression that matches the actual IDs. Additionally, one task in the suite must be a designated “daemon” that waits indefinitely on incoming data and reports each new dataset (and its ID) back to the suite by means of a special output message.

When the daemon task proxy receives a matching message it dynamically registers a new output (containing the ID) that downstream tasks can then trigger off. The downstream tasks likewise have prerequisites containing the ID pattern (because they trigger off the aforementioned outputs) and when these get satisfied during dependency negotiation the actual ID is substituted into their own registered outputs. Finally, each asynchronous repeating task proxy passes the ID to its task execution environment as $ASYNCID to allow identification of the correct dataset by task scripts. In this way a tree of tasks becomes dedicated to processing each new dataset, and multiple datasets can be processed in parallel if they become available in quick succession.

As Figure 29 shows, a repeating asynchronous suite currently plots just like a one-off asynchronous suite. But at run time the daemon task stays put, while the others continually spawn successors to wait for new datasets to come in. The asynchronous.repeating example suite demonstrates how to do this in a real suite. Note that other trigger types (success, failure, start, suicide, and conditional) cannot currently be used in a repeating asynchronous graph section.


# SUITE.RC 
title = a suite of repeating asynchronous tasks 
# for processing real time satellite datasets 
[scheduling] 
    [[dependencies]] 
        [[[ASYNCID:satX-\d{6}]]] 
            # match datasets satX-1424433 (e.g.) 
            graph = "watcher:a => foo:a & bar:a => baz" 
            daemon = watcher 
[runtime] 
    [[watcher]] 
        [[[outputs]]] 
            a = "New dataset <ASYNCID> ready for processing" 
    [[foo,bar]] 
        [[[outputs]]] 
            a = "Products generated from dataset <ASYNCID>"

PIC


Figure 29: Repeating Asynchronous Tasks.


8.3.4 Trigger Types

Trigger type, indicated by :type after the upstream task (or family) name, determines what kind of event results in the downstream task (or family) triggering.

Success Triggers The default, with no trigger type specified, is to trigger off the upstream task succeeding:

# SUITE.RC 
# B triggers if A SUCCEEDS: 
    graph = "A => B"

For consistency and completeness, however, the success trigger can be explicit:

# SUITE.RC 
# B triggers if A SUCCEEDS: 
    graph = "A => B" 
# or: 
    graph = "A:succeed => B"

Failure Triggers To trigger off the upstream task reporting failure:

# SUITE.RC 
# B triggers if A FAILS: 
    graph = "A:fail => B"

Section 8.3.4.8 (Suicide Triggers) shows one way of handling task B here if A does not fail.

Start Triggers To trigger off the upstream task starting to execute:

# SUITE.RC 
# B triggers if A STARTS EXECUTING: 
    graph = "A:start => B"

This can be used to trigger tasks that monitor other tasks once they (the target tasks) start executing. Consider a long-running forecast model, for instance, that generates a sequence of output files as it runs. A postprocessing task could be launched with a start trigger on the model (model:start => post) to process the model output as it becomes available. Note, however, that there are several alternative ways of handling this scenario: both tasks could be triggered at the same time (foo => model & post), but depending on external queue delays this could result in the monitoring task starting to execute first; or a different postprocessing task could be triggered off an internal output for each data file (model:out1 => post1 etc.; see Section 8.3.4.5), but this may not be practical if the number of output files is large or if it is difficult to add cylc messaging calls to the model.

Finish Triggers To trigger off the upstream task succeeding or failing, i.e. finishing one way or the other:

# SUITE.RC 
# B triggers if A either SUCCEEDS or FAILS: 
    graph = "A | A:fail => B" 
# or 
    graph = "A:finish => B"

Internal Triggers These are only required to trigger off events that occur before a task finishes.

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        [[[6,18]]] 
            # B triggers off internal output "upload1" of task A: 
            graph = "A:upload1 => B" 
[runtime] 
    [[A]] 
        [[[outputs]]] 
            upload1 = "NWP products uploaded for [T]"

Task A must emit this message when the actual output has been completed - see Reporting Internal Outputs Completed (Section 9.4.2).

Intercycle Triggers Typically most tasks in a suite will trigger off other cotemporal (i.e. the same cycle time) tasks, but some may depend on tasks with earlier cycle times. This notably applies to warm-cycled forecast models, which depend on their own previous instances (see below); but other kinds of intercycle dependence are possible too.5 Here’s how to express this kind of relationship in cylc:

# SUITE.RC 
[dependencies] 
    [[0,6,12,18]] 
        # B triggers off A in the previous cycle 
        graph = "A[T-6] => B"

Intercycle and trigger type (and internal output) notation can be combined:

# SUITE.RC 
    # B triggers if A in the previous cycle fails: 
    graph = "A[T-6]:fail => B"

Bootstrapping Intercycle Triggers Tasks with intercycle triggers require an associated cold-start task to bootstrap them into operation when the suite is cold-started, because they depend on a previous cycle that does not exist at start time. Otherwise the first such task will require manual triggering (and that will only suffice if the real task does not have real previous-cycle dependence in the first cycle). Section 8.3.5, Handling Intercycle Dependence At Start-Up, explains how to use cold-start tasks in cylc.

Conditional Triggers AND operators (&) can appear on both sides of an arrow. They provide a concise alternative to defining multiple triggers separately:

# SUITE.RC 
# 1/ this: 
    graph = "A & B => C" 
# is equivalent to: 
    graph = """A => C 
               B => C""" 
# 2/ this: 
    graph = "A => B & C" 
# is equivalent to: 
    graph = """A => B 
               A => C""" 
# 3/ and this: 
    graph = "A & B => C & D" 
# is equivalent to this: 
    graph = """A => C 
               B => C 
               A => D 
               B => D"""

OR operators (|), which result in true conditional triggers, can only appear on the left,6

# SUITE.RC 
# C triggers when either A or B finishes: 
    graph = "A | B => C"

Forecasting suites typically have simple conditional triggering requirements, but any valid conditional expression can be used, as shown in Figure 30 (conditional triggers are plotted with open arrow heads).


# SUITE.RC 
        graph = """ 
# D triggers if A or (B and C) succeed 
A | B & C => D 
# just to align the two graph sections 
D => W 
# Z triggers if (W or X) and Y succeed 
(W|X) & Y => Z 
                """

PIC


Figure 30: Conditional triggers are plotted with open arrow heads.


Suicide Triggers Suicide triggers take tasks out of the suite. This can be used for automated failure recovery. The suite.rc listing and accompanying graph in Figure 31 show how to define a chain of failure recovery tasks that trigger if they’re needed but otherwise remove themselves from the suite (you can run the AutoRecover.async example suite to see how this works). The dashed graph edges ending in solid dots indicate suicide triggers, and the open arrowheads indicate conditional triggers as usual.


# SUITE.RC 
title = asynchronous automated recovery 
description = """ 
Model task failure triggers diagnosis 
and recovery tasks, which take themselves 
out of the suite if model succeeds. Model 
post processing triggers off model OR 
recovery tasks. 
              """ 
[scheduling] 
    [[dependencies]] 
        graph = """ 
pre => model 
model:fail => diagnose => recover 
model => !diagnose & !recover 
model | recover => post 
                """ 
[runtime] 
    [[model]] 
        # UNCOMMENT TO TEST FAILURE: 
        # command scripting = /bin/false

PIC


Figure 31: Automated failure recovery via suicide triggers.


Note that multiple suicide triggers combine in the same way as other triggers, so this:

foo => !baz 
bar => !baz

is equivalent to this:

foo & bar => !baz

i.e. both foo and bar must succeed for baz to be taken out of the suite. If you really want a task to be taken out if any one of several events occurs then be careful to write it that way:

foo | bar => !baz

Family Triggers Families defined by the namespace inheritance hierarchy (Section 8.4) can be used in the graph to trigger whole groups of tasks at the same time (e.g. forecast model ensembles and groups of tasks for processing different observation types) and for triggering downstream tasks off families as a whole. Higher level families, i.e. families of families, can also be used, and are reduced to the lowest level member tasks. Note that tasks can also trigger off individual family members if necessary.

To trigger an entire task family at once:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        graph = "foo => fam" 
[runtime] 
    [[fam]]    # a family (because others inherit from it) 
    [[m1,m2]]  # family members (inherit from namespace fam) 
        inherit = fam

This is equivalent to:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        graph = "foo => m1 & m2" 
[runtime] 
    [[fam]] 
    [[m1,m2]] 
        inherit = fam

To trigger other tasks off families we have to specify whether to trigger off all members starting, succeeding, failing, or finishing, or off any member doing so. Legal family triggers are thus:

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        graph = """ 
      # all-member triggers: 
    fam:start-all => one 
    fam:succeed-all => one 
    fam:fail-all => one 
    fam:finish-all => one 
      # any-member triggers: 
    fam:start-any => one 
    fam:succeed-any => one 
    fam:fail-any => one 
    fam:finish-any => one 
                """

Here’s how to trigger downstream processing if one or more family members succeed, but only after all members have finished (succeeded or failed):

# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        graph = """ 
    fam:finish-all & fam:succeed-any => foo 
                """
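
The any/all semantics can be sketched like this (a simplified illustration with invented state names and a hypothetical helper, not cylc internals):

```python
def family_trigger(member_states, trigger):
    # Evaluate e.g. "succeed-all" or "finish-any" over member task states.
    event, mode = trigger.split("-")
    tests = {
        "start": lambda s: s in ("running", "succeeded", "failed"),
        "succeed": lambda s: s == "succeeded",
        "fail": lambda s: s == "failed",
        "finish": lambda s: s in ("succeeded", "failed"),
    }
    combine = all if mode == "all" else any
    return combine(tests[event](s) for s in member_states.values())

states = {"m1": "succeeded", "m2": "failed"}
# fam:finish-all & fam:succeed-any is satisfied for these states:
print(family_trigger(states, "finish-all"))   # True
print(family_trigger(states, "succeed-any"))  # True
print(family_trigger(states, "succeed-all"))  # False
```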

8.3.5 Handling Intercycle Dependence At Start-Up

In suites with intercycle dependence some kind of bootstrapping process is required to get the suite going initially. In the example shown in Intercycle Triggers (Section 8.3.4.6), for instance, in the very first cycle there is no previous instance of task A to satisfy B’s prerequisites.

Cold-Start Tasks A cold-start task is a special one-off task used to satisfy the initial previous-cycle dependence of another cotemporal task. In effect, the cold-start task masquerades as the previous-cycle trigger of its associated cycling task.

A cold-start task may invoke real processing to generate the files that would normally be produced by the associated cycling task; or it could be a dummy task that represents some external spin-up process that generates the same files but which has to be completed before the suite is started. In the latter case the cold-start task can just report itself successfully completed after checking that the required files are present.

This kind of relationship can easily be expressed with a conditional trigger:

# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        cold-start = ColdFoo 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "ColdFoo | Bar[T-6] => Foo"

i.e. Foo[T] can trigger off either Bar[T-6] or ColdFoo[T]. At start-up ColdFoo will do the job, and thereafter Bar[T-6] will do it.

Cold-start tasks can also be inserted into the suite at run time to cold-start just their associated cycling tasks, if a problem of some kind prevents continued normal cycling.

Warm-Starting A Suite Cold-start tasks have to be declared as such in the suite.rc “special tasks” section so that cylc knows they are one-off (non-spawning) tasks, and also because they play a critical role in suite warm-starts. A suite that has previously been running and then shut down can be warm-started at a particular cycle time, as an alternative to restarting from a previous state (restarting is preferred, though, because a warm start is likely to involve re-running some tasks). A warm start assumes the existence of a previous cycle, i.e. that any files from the previous cycle required by the new cycle are already in place, so the cold-start tasks do not need to run. However, cylc itself does not know the details of the previous cycle (it does in a restart, but not in a warm start), so it still has to solve the bootstrapping problem to get the suite started. It does this by starting the suite with the designated cold-start tasks in the succeeded state - in other words, finished cold-start tasks stand in for the previous finished cycle, rather than pretending to be a running previous cycle as they do in a cold start.

8.3.6 Model Restart Dependencies

Warm cycled forecast models generate restart files, e.g. model background fields, that are required to initialize the next forecast (this is essentially the definition of “warm cycling”). In fact restart files will often be written for a whole series of subsequent cycles in case the next cycle (or the one after that, and so on) has to be omitted:

# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        sequential = A 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            # Model A cold-start and restart dependencies: 
            graph = "ColdA | A[T-6] | A[T-12] | A[T-18] | A[T-24] => A"

In other words, task A can trigger off a cotemporal cold-start task, or off its own previous instance, or off the instance before that, and so on. Restart dependencies are unusual because although A could trigger off A[T-12] we don’t actually want it to do so unless A[T-6] fails and can’t be fixed. This is why Task A, above, is declared to be ‘sequential’.7 Sequential tasks do not spawn a successor until they have succeeded (by default, tasks spawn as soon as they start running in order to get maximum functional parallelism in a suite) which means that A[T+6] will not be waiting around to trigger off an older predecessor while A[T] is still running. If A[T] fails though, the operator can force it, on removal, to spawn A[T+6], whose restart dependencies will then automatically be satisfied by the older instance, A[T-6].

Forcing a model to run sequentially means, of course, that its restart dependencies cannot be violated anyway, so we might just ignore them. This is certainly an option, but it should be noted that there are some benefits to having your suite reflect all of the real dependencies between the tasks that it is managing, particularly for complex multi-model operational suites in which the suite operator might not be an expert on the models. Consider such a suite in which a failure in a driving model (e.g. weather) precludes running one or more cycles of the downstream models (sea state, storm surge, river flow, …). If the real restart dependencies of each model are known to the suite, the operator can just do a recursive purge to remove the subtree of all tasks that can never run due to the failure, and then cold-start the failed driving model after a gap (skipping as few cycles as possible until the new cold-start input data are available). After that the downstream models will kick off automatically so long as the gap is spanned by their respective restart files, because their restart dependencies will automatically be satisfied by the older pre-gap instances in the suite. Managing this kind of scenario manually in a complex suite can be quite difficult.

Finally, if a warm cycled model is declared to have explicit restart outputs, and is not declared to be sequential, and you define appropriately labeled restart outputs (whose messages must contain the word ‘restart’), then the task will spawn as soon as its last restart output is completed, so that successive instances of the task can overlap (i.e. run in parallel) if the opportunity arises. Whether or not this is worth the effort depends on your needs.

# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        explicit restart outputs = A 
    [[dependencies]] 
        [[[0,6,12,18]]] 
            graph = "ColdA | A[T-18]:r18 | A[T-12]:r12 | A[T-6]:r6 => A" 
[runtime] 
    [[A]] 
        [[[outputs]]] 
            r6  = restart files completed for [T+6] 
            r12 = restart files completed for [T+12] 
            r18 = restart files completed for [T+18]

8.4 Runtime - Task Configuration

The [runtime] section of a suite definition configures what to execute (and where and how to execute it) when each task is ready to run, in a multiple inheritance hierarchy of namespaces culminating in individual tasks. This allows all common configuration detail to be factored out and defined in one place.

Any namespace can configure any or all of the items defined in the Suite.rc Reference, Appendix A.

Namespaces that do not explicitly inherit from others automatically inherit from the root namespace (below).

Nested namespaces define task families that can be used in the graph as convenient shorthand for triggering all member tasks at once, or for triggering other tasks off all members at once - see Family Triggers, Section 8.3.4.9. Nested namespaces can be progressively expanded and collapsed in the dependency graph viewer, and in the gcylc graph and tree views. Only the first parent of each namespace (as for single-inheritance) is used for suite visualization purposes.

8.4.1 Namespace Names

Namespace names may contain letters, digits, underscores, and hyphens.

Note that task names need not be hardwired into task implementations because task and suite identity can be extracted portably from the task execution environment supplied by cylc (Section 8.4.7) - then to rename a task you can just change its name in the suite definition.

8.4.2 Root - Runtime Defaults

The root namespace, at the base of the inheritance hierarchy, provides default configuration for all tasks in the suite. Most root items are unset by default, but some have default values sufficient to allow test suites to be defined by dependency graph alone. The command scripting item, for example, defaults to code that prints a message then sleeps for between 1 and 15 seconds and exits. Default values are documented with each item in Appendix A. You can override the defaults or provide your own defaults by explicitly configuring the root namespace.
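As a minimal sketch (the values here are purely illustrative, not built-in defaults), a root section that supplies your own suite-wide defaults might look like this:

```
# SUITE.RC 
[runtime] 
    [[root]] 
        # inherited by every task unless overridden lower in the hierarchy 
        command scripting = "echo dummy run; sleep 10" 
        [[[environment]]] 
            DATA_DIR = $HOME/data   # an illustrative suite-wide variable
```

Any task or family can then override these items selectively, as described in the following sections.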

8.4.3 Defining Multiple Namespaces At Once

If a namespace section heading is a comma-separated list of names then the subsequent configuration applies to each list member. Particular tasks can be singled out at run time using the $CYLC_TASK_NAME variable.

As an example, consider a suite containing an ensemble of closely related tasks that each invokes the same script but with a unique argument that identifies the calling task name:

# SUITE.RC 
[runtime] 
    [[ensemble]] 
        command scripting = "run-model.sh $CYLC_TASK_NAME" 
    [[m1, m2, m3]] 
        inherit = ensemble

For large ensembles Jinja2 template processing can be used to automatically generate the member names and associated dependencies (see Section 8.6).

8.4.4 Runtime Inheritance - Single

The following listing of the inherit.single.one example suite illustrates basic runtime inheritance with single parents.

 
# SUITE.RC 
title = "User Guide [runtime] example." 
[cylc] 
    required run mode = simulation # (no task implementations) 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    [[dependencies]] 
        graph = """foo => OBS 
             OBS:succeed-all => bar""" 
[runtime] 
    [[root]] # base namespace for all tasks (defines suite-wide defaults) 
        [[[job submission]]] 
            method = at_now 
        [[[environment]]] 
            COLOR = red 
    [[OBS]]  # family (inherited by land, ship); implicitly inherits root 
        command scripting = run-${CYLC_TASK_NAME}.sh 
        [[[environment]]] 
            RUNNING_DIR = $HOME/running/$CYLC_TASK_NAME 
    [[land]] # a task (a leaf on the inheritance tree) in the OBS family 
        inherit = OBS 
        description = land obs processing 
    [[ship]] # a task (a leaf on the inheritance tree) in the OBS family 
        inherit = OBS 
        description = ship obs processing 
        [[[job submission]]] 
            method = loadleveler 
        [[[environment]]] 
            RUNNING_DIR = $HOME/running/ship  # override OBS environment 
            OUTPUT_DIR = $HOME/output/ship    # add to OBS environment 
    [[foo]] 
        # (just inherits from root) 
 
    # The task [[bar]] is implicitly defined by its presence in the 
    # graph; it is also a dummy task that just inherits from root.

8.4.5 Runtime Inheritance - Multiple

If a namespace inherits from multiple parents the linear order of precedence (which namespace overrides which) is determined by the so-called C3 algorithm used to find the linear method resolution order for class hierarchies in Python and several other object oriented programming languages. The result of this should be fairly obvious for typical use of multiple inheritance in cylc suites, but for detailed documentation of how the algorithm works refer to the official Python documentation here: http://www.python.org/download/releases/2.3/mro/.
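As with Python's MRO, the first-listed parent should take precedence when two parents configure the same item. The following sketch (namespace and variable names are illustrative) shows the expected outcome under that assumption:

```
# SUITE.RC 
[runtime] 
    [[FAST]] 
        [[[environment]]] 
            MODE = fast 
    [[SAFE]] 
        [[[environment]]] 
            MODE = safe 
            CHECKS = on 
    [[task_a]] 
        # linearization: task_a, FAST, SAFE, root - so task_a ends up 
        # with MODE=fast (first-listed parent wins) and CHECKS=on 
        inherit = FAST, SAFE
```

Use cylc get-config (see below) to confirm the resolved values in your own suites.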

The inherit.multi.one example suite, listed here, makes use of multiple inheritance:

 
 
title = "multiple inheritance example" 
 
description = """To see how multiple inheritance works: 
 
 % cylc list -tb[m] SUITE # list namespaces 
 % cylc graph -n SUITE # graph namespaces 
 % cylc graph SUITE # dependencies, collapse on first-parent namespaces 
 
  % cylc get-config --sparse --item [runtime]ops_s1 SUITE 
  % cylc get-config --sparse --item [runtime]var_p2 foo""" 
 
[scheduling] 
    [[dependencies]] 
        graph = "OPS:finish-all => VAR" 
 
[runtime] 
    [[root]] 
    [[OPS]] 
        command scripting = echo "RUN: run-ops.sh" 
    [[VAR]] 
        command scripting = echo "RUN: run-var.sh" 
    [[SERIAL]] 
        [[[directives]]] 
            job_type = serial 
    [[PARALLEL]] 
        [[[directives]]] 
            job_type = parallel 
    [[ops_s1, ops_s2]] 
        inherit = OPS, SERIAL 
 
    [[ops_p1, ops_p2]] 
        inherit = OPS, PARALLEL 
 
    [[var_s1, var_s2]] 
        inherit = VAR, SERIAL 
 
    [[var_p1, var_p2]] 
        inherit = VAR, PARALLEL

cylc get-config provides an easy way to check the result of inheritance in a suite. You can extract specific items, e.g.:

% cylc get-config --item '[runtime][var_p2]command scripting' inherit.multi.one 
echo "RUN: run-var.sh"

or use the --sparse option to print entire namespaces without obscuring the result with the dense runtime structure obtained from the root namespace:

% cylc get-config --sparse --item '[runtime]ops_s1' inherit.multi.one 
command scripting = echo "RUN: run-ops.sh" 
inherit = ['OPS', 'SERIAL'] 
[directives] 
   job_type = serial

8.4.6 How Runtime Inheritance Works

The linear precedence order of ancestors is computed for each namespace using the C3 algorithm. Then any runtime items that are explicitly configured in the suite definition are “inherited” up the linearized hierarchy for each task, starting at the root namespace: if a particular item is defined at multiple levels in the hierarchy, the level nearest the final task namespace takes precedence. Finally, root namespace defaults are applied for every item that has not been configured in the inheritance process (this is more efficient than carrying the full dense namespace structure through from root from the beginning).

8.4.7 Task Execution Environment

The task execution environment contains suite and task identity variables provided by cylc, and user-defined environment variables. The environment is explicitly exported (by the task job script) prior to executing task command scripting (see Task Job Submission, Section 10).

Suite and task identity are exported first, so that user-defined variables can refer to them. Order of definition is preserved throughout so that variable assignment expressions can safely refer to previously defined variables.
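For example (variable names here are illustrative), a later variable can safely build on an earlier one, and on the cylc-defined identity variables:

```
# SUITE.RC 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            DATA_DIR = $HOME/data/$CYLC_TASK_NAME   # refers to cylc identity 
            INPUT    = $DATA_DIR/input.nc           # refers to the line above
```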

Additionally, access to cylc itself is configured prior to the user-defined environment, so that variable assignment expressions can make use of cylc utility commands:

# SUITE.RC 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            REFERENCE_TIME = $( cylc util cycletime --offset-hours=6 )

User Environment Variables A task’s user-defined environment results from its inherited [[[environment]]] sections:

# SUITE.RC 
[runtime] 
    [[root]] 
        [[[environment]]] 
            COLOR = red 
            SHAPE = circle 
    [[foo]] 
        [[[environment]]] 
            COLOR = blue  # root override 
            TEXTURE = rough # new variable

This results in a task foo with SHAPE=circle, COLOR=blue, and TEXTURE=rough in its environment.

Overriding Environment Variables When you override inherited namespace items the original parent item definition is replaced by the new definition. This applies to all items including those in the environment sub-sections which, strictly speaking, are not “environment variables” until they are written, post inheritance processing, to the task job script that executes the associated task. Consequently, if you override an environment variable you cannot also access the original parent value:

# SUITE.RC 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            COLOR = red 
    [[bar]] 
        inherit = foo 
        [[[environment]]] 
            tmp = $COLOR        # !! ERROR: $COLOR is undefined here 
            COLOR = dark-$tmp   # !! as this overrides COLOR in foo.

The compressed variant of this, COLOR = dark-$COLOR, is also in error for the same reason. To achieve the desired result you must use a different name for the parent variable:

# SUITE.RC 
[runtime] 
    [[foo]] 
        [[[environment]]] 
            FOO_COLOR = red 
    [[bar]] 
        inherit = foo 
        [[[environment]]] 
            COLOR = dark-$FOO_COLOR  # OK

Suite And Task Identity Variables The task identity variables provided to tasks by cylc are:

$CYLC_TASK_ID                    # X.2011051118 (e.g.) 
$CYLC_TASK_NAME                  # X 
$CYLC_TASK_CYCLE_TIME            # 2011051118 
$CYLC_TASK_LOG_ROOT              # ~/cylc-run/foo.bar.baz/log/job/X.2011051118.1 
$CYLC_TASK_NAMESPACE_HIERARCHY   # "root postproc X" (e.g.) 
$CYLC_TASK_TRY_NUMBER            # increments with automatic retry-on-fail 
$CYLC_TASK_WORK_PATH             # task work directory (see below) 
$CYLC_SUITE_SHARE_PATH           # suite (or task!) shared directory (see below) 
$CYLC_TASK_IS_COLDSTART          # 'True' for cold-start tasks, else 'False'

And the suite identity variables are:

$CYLC_SUITE_DEF_PATH   # $HOME/mysuites/baz (e.g.) 
$CYLC_SUITE_REG_NAME   # foo.bar.baz (e.g.) 
$CYLC_SUITE_REG_PATH   # foo/bar/baz 
$CYLC_SUITE_HOST       # orca.niwa.co.nz (e.g.) 
$CYLC_SUITE_PORT       # 7766 (e.g.) 
$CYLC_SUITE_OWNER      # oliverh (e.g.)

The variable $CYLC_SUITE_REG_PATH is just $CYLC_SUITE_REG_NAME (the hierarchical name under which the suite definition is registered in your suite database) translated into a directory path. This can be used when configuring suite logging directories and the like to put suite output in a directory tree that reflects the suite registration hierarchy (as opposed to the namespace hierarchy).

Some of these variables are also used by cylc task messaging commands in order to automatically target the right task proxy object in the right suite.

Suite Share And Task Work Directories See the variable listing above, and Sections A.4.1.11.2 and A.4.1.11.3.

Task command scripting is executed from within a work directory created on the fly, if necessary, by the task’s job script. In non-detaching tasks the work directory is automatically removed again if it is empty before the job script exits.

The share directory is also created on the fly, if necessary, by the job script. It is intended as a shared data area for multiple tasks on the same host, but as for any task runtime config item it can be specialized to particular tasks or groups of tasks.

The code for creating these directories, and removing empty work directories, can be seen by examining a task job script.

Other Cylc-Defined Environment Variables Initial and final cycle times, if supplied via the suite.rc file or the command line, are passed to task execution environments as:

$CYLC_SUITE_INITIAL_CYCLE_TIME 
$CYLC_SUITE_FINAL_CYCLE_TIME

Running tasks can use these to determine whether or not they are running in the first or final cycles.
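For instance, a task's command scripting can branch on the first cycle (run-model.sh is a hypothetical script here):

```
# SUITE.RC 
[runtime] 
    [[model]] 
        command scripting = """ 
if [[ $CYLC_TASK_CYCLE_TIME == $CYLC_SUITE_INITIAL_CYCLE_TIME ]]; then 
    echo "first cycle: no previous restart files expected" 
fi 
run-model.sh"""
```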

Environment Variable Evaluation Variables in the task execution environment are not evaluated in the shell in which the suite is running prior to submitting the task. They are written in unevaluated form to the job script that is submitted by cylc to run the task (Section 10.1) and are therefore evaluated when the task begins executing under the task owner account on the task host. Thus $HOME, for instance, evaluates at run time to the home directory of the task owner on the task host.

8.4.8 Remote Task Hosting

If a task declares an owner other than the suite owner and/or a host other than the suite host, e.g.:

# SUITE.RC 
[runtime] 
    [[foo]] 
        [[[remote]]] 
            host = orca.niwa.co.nz 
            owner = bob 
            cylc directory = /path/to/remote/cylc/installation/on/foo 
            suite definition directory = /path/to/remote/suite/definition/on/foo

cylc will execute the task on the declared host, by the configured job submission method, as the declared owner, using passwordless ssh.

A local task that runs under another user account is treated as a remote task.

You may not need this functionality if you have a cross-platform resource manager, such as loadleveler, that allows you to submit a job locally to run on the remote host.

Remote host functionality, like all namespace settings, can be declared globally (in the root namespace) or per family, or for individual tasks. Use the global settings if all or most of your tasks need to run on the same remote host.

If cylc is not in your $PATH on the remote host you can use the “cylc directory” item to give remote tasks access to cylc. However, the remote job submission mechanism automatically sources .profile if it exists, so you can also set your PATH for cylc there. Similarly tasks can use $CYLC_SUITE_DEF_PATH to access suite files on the task host, and the suite bin directory is automatically added to $PATH. If a remote suite definition directory is not specified the local (suite host) path will be assumed with the local home directory, if present, substituted with literal $HOME for evaluation on the task host.

Dynamic Host Selection Instead of hardwiring task host names into the suite definition you can specify a shell back-tick expression, as the value of the host config item, which executes an external command (it may be a script located in the suite bin directory) that writes a hostname to stdout.
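A sketch of this (select-host.sh is a hypothetical script in the suite bin directory that prints a single hostname to stdout):

```
# SUITE.RC 
[runtime] 
    [[big_model]] 
        [[[remote]]] 
            host = `select-host.sh`
```

The expression is evaluated at job submission time, so the chosen host can vary from cycle to cycle.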

Remote Task Log Directories Task stdout and stderr streams are written to log files in a suite-specific sub-directory of the suite run directory, as explained in Section 10.3. For remote tasks the same directory is used, but on the task host. Remote task log directories, like local ones, are created on the fly, if necessary, during job submission.

8.5 Visualization

The visualization section of a suite definition is used to configure suite graphing, principally graph node (task) and edge (dependency arrow) style attributes. Tasks can be grouped for the purpose of applying common style attributes. See the suite.rc reference (Appendix A) for details.

8.5.1 Collapsible Families In Suite Graphs
# SUITE.RC 
[visualization] 
    collapsed families = family1, family2

Nested families from the namespace inheritance hierarchy, even if they are not used as family triggers in the graph, can be expanded or collapsed in suite graphs and in the suite control GUI’s text and graph views.

In the graph view, ungraphed tasks, which include the members of collapsed families, are automatically plotted as rectangular nodes to the right of the main graph if they are doing anything interesting (submitted, running, or failed).

Note that family relationships can be defined purely for visualization purposes - you can group tasks at root level in the inheritance hierarchy prior to defining real properties at higher levels.

Figure 32 illustrates successive expansion of nested task families in the namespaces example suite, which has the following namespace hierarchy:

% cylc list --tree cylc-x-y-z.namespaces 
root 
 |-GEN 
 | |-OPS 
 | | |-aircraft   OPS aircraft obs processing 
 | | |-atovs      OPS ATOVS obs processing 
 | | ‘-atovs_post OPS ATOVS postprocessing 
 | ‘-VAR 
 |   |-AnPF       runs VAR AnalysePF 
 |   ‘-ConLS      runs VAR ConfigureLS 
 |-baz 
 | |-bar1         Task bar1 of baz 
 | ‘-bar2         Task bar2 of baz 
 |-foo            No description provided 
 ‘-prepobs        obs preprocessing


PIC

PIC

PIC

PIC

PIC

PIC


Figure 32: Graphs of the namespaces example suite showing various states of expansion of the nested namespace family hierarchy, from all families collapsed (top left) through to all expanded (bottom right). This can also be done by right-clicking on tasks in the suite control GUI graph view.


8.6 Jinja2 Suite Templates

Support for the Jinja2 template processor adds general variables, mathematical expressions, loop control structures, and conditional expressions to suite.rc files - which are automatically preprocessed to generate the final suite definition seen by cylc.

The need for Jinja2 processing must be declared with a hash-bang comment as the first line of the suite.rc file:

#!Jinja2 
# ...

Potential uses for this include automatic generation of repeated groups of similar tasks and dependencies, and inclusion or exclusion of entire suite sections according to the value of a single flag. Consider a large complicated operational suite and several related parallel test suites with slightly different task content and structure (the parallel suites, for instance, might take certain large input files from the operational suite or from the archive rather than downloading them again) - these can now be maintained as a single master suite definition that reconfigures itself according to the value of a flag variable indicating the intended use.

Template processing is the first thing done on parsing a suite definition so Jinja2 expressions can appear anywhere in the file (inside strings and namespace headings, for example).

Jinja2 is well documented at http://jinja.pocoo.org/docs, so here we just provide an example suite that uses it. The meaning of the embedded Jinja2 code should be reasonably self-evident to anyone familiar with standard programming techniques.


PIC


Figure 33: The Jinja2 ensemble example suite graph.


The jinja2.ensemble example, graphed in Figure 33, shows an ensemble of similar tasks generated using Jinja2:

#!jinja2 
{% set N_MEMBERS = 5 %} 
[scheduling] 
    [[dependencies]] 
        graph = """{# generate ensemble dependencies #} 
            {% for I in range( 0, N_MEMBERS ) %} 
               foo => mem_{{ I }} => post_{{ I }} => bar 
            {% endfor %}"""

Here is the generated suite definition, after Jinja2 processing:

#!jinja2 
[scheduling] 
    [[dependencies]] 
        graph = """ 
          foo => mem_0 => post_0 => bar 
          foo => mem_1 => post_1 => bar 
          foo => mem_2 => post_2 => bar 
          foo => mem_3 => post_3 => bar 
          foo => mem_4 => post_4 => bar 
                """

And finally, the jinja2.cities example uses variables, includes or excludes special cleanup tasks according to the value of a logical flag, and automatically generates all dependencies and family relationships for a group of tasks that is repeated for each city in the suite. To add a new city and its associated tasks and dependencies, simply add the city name to the list at the top of the file. The suite is graphed, with the New York City task family expanded, in Figure 34.

 
#!Jinja2 
 
title = "Jinja2 city suite example." 
description = """ 
Illustrates use of variables and math expressions, and programmatic 
generation of groups of related dependencies and runtime properties.""" 
 
{% set HOST = "SuperComputer" %} 
{% set CITIES = 'NewYork', 'Philadelphia', 'Newark', 'Houston', 'SantaFe', 'Chicago' %} 
{% set CITYJOBS = 'one', 'two', 'three', 'four' %} 
{% set LIMIT_MINS = 20 %} 
 
{% set CLEANUP = True %} 
 
[scheduling] 
    [[ dependencies ]] 
{% if CLEANUP %} 
        [[[23]]] 
            graph = "clean" 
{% endif %} 
        [[[0,12]]] 
            graph = """ 
                    setup => get_lbc & get_ic # foo 
{% for CITY in CITIES %} {# comment #} 
                    get_lbc => {{ CITY }}_one 
                    get_ic => {{ CITY }}_two 
                    {{ CITY }}_one & {{ CITY }}_two => {{ CITY }}_three & {{ CITY }}_four 
{% if CLEANUP %} 
                    {{ CITY }}_three & {{ CITY }}_four => cleanup 
{% endif %} 
{% endfor %} 
                    """ 
[runtime] 
    [[on_{{ HOST }} ]] 
        [[[remote]]] 
            host = {{ HOST }} 
            # (remote cylc directory is set in site/user config for this host) 
        [[[directives]]] 
            wall_clock_limit = "00:{{ LIMIT_MINS|int() + 2 }}:00,00:{{ LIMIT_MINS }}:00" 
 
{% for CITY in CITIES %} 
    [[ {{ CITY }} ]] 
        inherit = on_{{ HOST }} 
{% for JOB in CITYJOBS %} 
    [[ {{ CITY }}_{{ JOB }} ]] 
        inherit = {{ CITY }} 
{% endfor %} 
{% endfor %} 
 
[visualization] 
    initial cycle time = 2011080812 
    final cycle time = 2011080823 
    [[node groups]] 
        cleaning = clean, cleanup 
    [[node attributes]] 
        cleaning = 'style=filled', 'fillcolor=yellow' 
        NewYork = 'style=filled', 'fillcolor=lightblue'


PIC


Figure 34: The Jinja2 cities example suite graph, with the New York City task family expanded.


8.6.1 Accessing Environment Variables With Jinja2

This functionality is not provided by Jinja2 by default, but cylc automatically imports the user environment to the template in a dictionary structure called environ. A usage example:

#!Jinja2 
#... 
[runtime] 
    [[root]] 
        [[[environment]]] 
            SUITE_OWNER_HOME_DIR_ON_SUITE_HOST = {{environ['HOME']}}

This example emphasizes that the environment is read on the suite host at the time the suite definition is parsed - it is not, for instance, read at task run time on the task host.

8.6.2 Custom Jinja2 Filters

Jinja2 variable values can be modified by “filters”, using pipe notation. For example, the built-in trim filter strips leading and trailing white space from a string:

{% set MyString = "   dog   " %} 
{{ MyString | trim() }}  # "dog"

(See official Jinja2 documentation for available built-in filters.)

Cylc also supports custom Jinja2 filters. A custom filter is a single Python function in a source file with the same name as the function (plus “.py” extension) and stored in one of the following locations:

In the filter function argument list, the first argument is the variable value to be “filtered”, and subsequent arguments can be whatever is needed. Currently there is one custom filter called “pad” in the central cylc Jinja2 filter directory, for padding string values to some constant length with a fill character - useful for generating task names and related values in ensemble suites:

{% for i in range(0,100) %}  # 0, 1, ..., 99 
    {% set j = i | pad(2,'0') %} 
    A_{{j}}          # A_00, A_01, ..., A_99 
{% endfor %}

8.6.3 Associative Arrays In Jinja2

Associative arrays (dicts in Python) can be very useful. Here’s an example, from $CYLC_DIR/examples/jinja2/dict:

#!Jinja2 
{% set obs_types = ['airs', 'iasi'] %} 
{% set resource = { 'airs':'ncpus=9', 'iasi':'ncpus=20' } %} 
 
[scheduling] 
    [[dependencies]] 
        graph = "obs" 
[runtime] 
    [[obs]] 
        [[[job submission]]] 
            method = pbs 
    {% for i in obs_types %} 
    [[ {{i}} ]] 
        inherit = obs 
        [[[directives]]] 
             -I = {{ resource[i] }} 
     {% endfor %}

Here’s the result:

% cylc get-config -i [runtime][airs]directives SUITE 
-I = ncpus=9

8.6.4 Jinja2 Default Values And Template Inputs

The values of Jinja2 variables can be passed in from the cylc command line rather than hardwired in the suite definition. Here’s an example, from $CYLC_DIR/examples/jinja2/defaults:

#!Jinja2 
 
title = "Jinja2 example: use of defaults and external input" 
 
description = """ 
The template variable FIRST_TASK must be given on the cylc command line 
using --set or --set-file=FILE; two other variables, LAST_TASK and 
N_MEMBERS can be set similarly, but if not they have default values.""" 
 
{% set LAST_TASK = LAST_TASK | default( 'baz' ) %} 
{% set N_MEMBERS = N_MEMBERS | default( 3 ) | int %} 
 
{# input of FIRST_TASK is required - no default #} 
 
[scheduling] 
    initial cycle time = 2010080800 
    final cycle time   = 2010081600 
    [[dependencies]] 
        [[[0]]] 
            graph = """{{ FIRST_TASK }} => ens 
                 ens:succeed-all => {{ LAST_TASK }}""" 
[runtime] 
    [[ens]] 
{% for I in range( 0, N_MEMBERS ) %} 
    [[ mem_{{ I }} ]] 
        inherit = ens 
{% endfor %}

Here’s the result:

% cylc list SUITE 
Jinja2 Template Error 
'FIRST_TASK' is undefined 
cylc-list -t foo  failed:  1 
 
% cylc list --set FIRST_TASK=bob foo 
bob 
baz 
mem_2 
mem_1 
mem_0 
 
% cylc list --set FIRST_TASK=bob --set LAST_TASK=alice foo 
bob 
alice 
mem_2 
mem_1 
mem_0 
 
% cylc list --set FIRST_TASK=bob --set N_MEMBERS=10 foo 
mem_9 
mem_8 
mem_7 
mem_6 
mem_5 
mem_4 
mem_3 
mem_2 
mem_1 
mem_0 
baz 
bob

Note also that cylc view --set FIRST_TASK=bob --jinja2 SUITE will show the suite with the Jinja2 variables as set.

Warning: suites started with template variables set on the command line do not currently restart with the same settings - you have to set them again on the cylc restart command line.

8.7 Special Placeholder Variables

Several special variables are used as placeholders in cylc suite definitions:

To use proper variables (c.f. programming languages) in suite definitions, see the Jinja2 template processor (Section 8.6).

8.8 Omitting Tasks At Runtime

It is sometimes convenient to omit certain tasks from the suite at runtime without actually deleting their definitions from the suite.

Defining [runtime] properties for tasks that do not appear in the suite graph results in verbose-mode validation warnings that the tasks are disabled. They cannot be used because the suite graph is what defines their dependencies and valid cycle times. Nevertheless, it is legal to leave these orphaned runtime sections in the suite definition because it allows you to temporarily remove tasks from the suite by simply commenting them out of the graph.

To omit a task from the suite at runtime but still leave it fully defined and available for use (by insertion or cylc submit) use one or both of the [scheduling][[special tasks]] lists, include at start-up or exclude at start-up (documented in Sections A.3.5.8 and A.3.5.7). Then the graph still defines the validity of the tasks and their dependencies, but they are not actually inserted into the suite at start-up. Other tasks that depend on the omitted ones, if any, will have to wait on their insertion at a later time or otherwise be triggered manually.
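A minimal sketch of this (the task name archive is illustrative):

```
# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        # 'archive' stays fully defined and validated, but is not 
        # inserted at start-up; it can be inserted or submitted later 
        exclude at start-up = archive 
    [[dependencies]] 
        [[[0]]] 
            graph = "model => archive"
```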

Finally, with Jinja2 (Section 8.6) you can radically alter suite structure by including or excluding tasks from the [scheduling] and [runtime] sections according to the value of a single logical flag defined at the top of the suite.

8.9 Naked Dummy Tasks And Strict Validation

A naked dummy task appears in the suite graph but has no explicit runtime configuration section. Such tasks automatically inherit the default “dummy task” configuration from the root namespace. This is very useful because it allows functional suites to be mocked up quickly for test and demonstration purposes by simply defining the graph. It is somewhat dangerous, however, because there is no way to distinguish an intentional naked dummy task from one generated by typographic error: misspelling a task name in the graph results in a new naked dummy task replacing the intended task in the affected trigger expression; and misspelling a task name in a runtime section heading results in the intended task becoming a dummy task itself (by divorcing it from its intended runtime config section).

To avoid this problem any dummy task used in a real suite should not be naked - i.e. it should have an explicit entry under the [runtime] section of the suite definition, even if the section is empty. This results in exactly the same dummy task behaviour, via implicit inheritance from root, but it allows use of cylc validate --strict to catch errors in task names by failing the suite if any naked dummy tasks are detected.
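To illustrate the typo hazard (modle is a deliberate misspelling of the intended task name):

```
# SUITE.RC 
[scheduling] 
    [[dependencies]] 
        # typo: 'modle' silently becomes a naked dummy task, and the 
        # intended task 'model' is never triggered 
        graph = "prep => modle" 
[runtime] 
    [[model]] 
        command scripting = run-model.sh
```

Here cylc validate --strict would fail on the naked dummy task modle, exposing the typo.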

9 Task Implementation

 9.1 Most Tasks Require No Modification For Cylc
 9.2 Suite.rc Inlined Tasks
 9.3 Voluntary Messaging Modifications
 9.4 Tasks Requiring Modification For Cylc

This section lays out the minimal requirements on external commands, scripts, or executables invoked by cylc to carry out task processing.

9.1 Most Tasks Require No Modification For Cylc

Any existing command, script, or executable can function as a cylc task (or rather, perform the external processing that the task represents), subject to the following conditions:

If these requirements are not met, see Tasks Requiring Modification For Cylc, Section 9.4.

The following suite runs a couple of external scripts that are not cylc-aware, but which meet the requirements above so that no special treatment is required at all:

# SUITE.RC 
[runtime] 
    [[foo]] 
        description = a task that runs foo.sh 
        command scripting = foo.sh OPTIONS ARGUMENTS 
    [[bar]] 
        description = a task that runs bar.sh 
        command scripting = """echo HELLO 
                               bar.sh 
                               echo BYE""" 
    [[baz]] 
        description = a task that runs baz.sh and retries on failure 
        retry delays = 3*1 # or 1,1,1 
        command scripting = """echo attempt No. $CYLC_TASK_TRY_NUMBER 
                               baz.sh"""

9.2 Suite.rc Inlined Tasks

Simple tasks can be entirely implemented within the suite.rc file because the task command scripting string can contain as many lines of code as you like.

9.3 Voluntary Messaging Modifications

You can, if you like, modify task scripts to send explanatory or progress messages to the suite as the task runs. For example, a task can send a priority critical message before aborting on error:

#!/bin/bash 
set -e  # abort on error 
if ! mkdir /illegal/dir; then 
    # (use inline error checking to avoid triggering the above 'set -e') 
    cylc task message -p CRITICAL "Failed to create directory /illegal/dir" 
    exit 1 # now abort with non-zero exit status to trigger the task failed message 
fi

You can also use this syntax:

#!/bin/bash 
set -e 
mkdir /illegal/dir || {  # inline error checking using OR operator 
    cylc task message -p CRITICAL "Failed to create directory /illegal/dir" 
    exit 1 
}

But not this:

#!/bin/bash 
set -e 
mkdir /illegal/dir  # aborted via 'set -e' 
if [[ $? != 0 ]]; then  # so this will never be reached. 
    cylc task message -p CRITICAL "Failed to create directory /illegal/dir" 
    exit 1 
fi
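The reason the final form fails can be demonstrated in a plain shell, without cylc:

```shell
#!/bin/bash
# Show that under 'set -e' a failing command aborts the sub-shell before
# any subsequent exit-status check can run (plain shell; no cylc needed).
out=$(bash -c '
set -e
false                 # non-zero exit status aborts the sub-shell here
echo "never reached"  # so a "$?" check after this point cannot execute
')
status=$?
echo "sub-shell exit status: $status"
echo "sub-shell output: [$out]"
```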

You can also send warning messages, or general information:

#!/bin/bash 
# a warning message (this will be logged by the suite): 
cylc task message -p WARNING "oops, something's fishy here" 
# information (this will also be logged by the suite): 
cylc task message "Hello from task foo"

This may be useful - any message received from a task is logged by cylc - but it is not a requirement. If error messages are not reported, for instance, detected task failures will still be registered, and task stdout and stderr logs can still be examined for evidence of what went wrong.

9.4 Tasks Requiring Modification For Cylc

There are two main categories of task that require some modification to work with cylc: those with internal (pre-completion) outputs that other tasks need to trigger off, and those that spawn internal processes that detach and carry on after the parent task exits. Additionally, any task that fails to indicate a fatal error by returning non-zero exit status must be corrected.

9.4.1 Returning Non-zero Exit Status On Error

The requirement to abort with non-zero exit status on error (which should be normal scripting practice in any case) allows the task job script to trap errors and send a cylc task failed message to alert the suite. You can use set -e to avoid writing explicit error checks for every operation:

#!/bin/bash 
set -e  # abort on error 
mkdir /illegal/dir  # this error will abort the script with non-zero exit status

See Section 9.3, Voluntary Messaging Modifications, for more examples of error detection and cylc messaging.

9.4.2 Reporting Internal Outputs Completed

If a task has internal outputs that others can trigger off before it finishes, then it must report completion of those outputs with messages sent back to the suite at the appropriate times. Output messages must satisfy three conditions:

  1. they must be unique within the suite, or else downstream tasks will trigger off whichever task happens to send the message first;
  2. they must exactly match the corresponding outputs registered for the task, because cylc distinguishes between registered outputs that others can trigger off and general messages that are just logged;
  3. for cycling tasks they must contain the cycle time, to distinguish between the outputs of successive instances of the same task.

The “outputs” example is a self-contained suite that illustrates use of internal outputs:

 
title = "triggering off internal task outputs" 
 
description = """ 
This is a self contained example (task implementation, including output 
messaging, is entirely contained within the suite definition).""" 
 
[scheduling] 
    initial cycle time = 2010080806 
    final cycle time = 2010080812 
    [[dependencies]] 
        [[[0,12]]] 
          graph = """ 
            foo:out1 => bar 
            foo:out2 => baz 
                  """ 
[runtime] 
    [[foo]] 
        command scripting = """ 
echo HELLO 
sleep 10 
# use task runtime environment variables here 
cylc task message "foo uploaded file set 1 for $CYLC_TASK_CYCLE_TIME" 
sleep 10 
cylc task message "$CYLC_TASK_NAME uploaded file set 2 for $CYLC_TASK_CYCLE_TIME" 
sleep 10 
echo BYE""" 
        [[[outputs]]] 
            # use cylc placeholder variables here 
            out1 = "foo uploaded file set 1 for [T]" 
            out2 = "foo uploaded file set 2 for [T]"

Note that cycle time in the output message registration is expressed by a special placeholder variable (see Section 8.7) not by the corresponding environment variable. This is because registered output messages are held by task proxies, inside cylc, for comparison with incoming task messages; they are never interpreted by the shell and consequently may not contain environment variables. The actual messaging calls made by running tasks, on the other hand, can make use of variables in the task runtime environment.
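The comparison can be sketched in the shell (illustrative logic only, not cylc's actual implementation, which happens inside the task proxies):

```shell
#!/bin/bash
# Sketch of how a registered output containing the [T] placeholder can be
# matched against an incoming task message. The registered string is held
# by cylc; the incoming message was expanded by the task's own shell.
CYCLE=2010080806
registered="foo uploaded file set 1 for [T]"
incoming="foo uploaded file set 1 for $CYCLE"

# substitute the placeholder with the task's cycle time, then compare:
expected=${registered/\[T\]/$CYCLE}
if [ "$incoming" = "$expected" ]; then
    echo "MATCH: registered output completed"
else
    echo "NO MATCH: message just logged"
fi
```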

9.4.3 Reconnecting Detaching Processes

You may be able to convert a detaching task to a non-detaching task very easily. If the detaching process is just a background shell process, for instance, run it in the foreground instead; for loadleveler the -s option prevents llsubmit from returning until the job has completed; for Sun Grid Engine, qsub -sync yes has the same effect. Section 10.4 shows how to override the command template used by cylc in order to customize job submission command options like this.

9.4.4 Tasks That Detach And Exit Early

Tasks that spawn jobs internally (e.g. to a batch queue scheduler or to another host) and then detach and exit without seeing the resulting processing through must arrange for the spawned processing to send its own “cylc task succeeded” or “cylc task failed” message on completion. The cylc-generated job script (Section 10.1), which otherwise arranges for automatic completion messaging, cannot know when the task is really finished.

First check that you can’t easily “reconnect” the detaching internal processes, as described above in Section 9.4.3. If not, then start by disabling cylc’s automatic completion messaging:

# SUITE.RC 
[runtime] 
    [[root]] 
        manual completion = True   # global setting 
    [[foo]] 
        manual completion = False  # task-specific setting

Now, reporting success or failure is just a matter of calling the cylc messaging commands:

#!/bin/bash 
# ... 
if $SUCCESS; then 
    # release my task lock and report success 
    cylc task succeeded 
    exit 0 
else 
    # release my task lock and report failed 
    cylc task failed "Input file X not found" 
    exit 1 
fi

Bear in mind, however, that cylc messaging commands read environment variables that identify the calling task and the target suite, so if your job submission method does not automatically copy its parent environment you must arrange for these variables, at the least, to be propagated through to your spawned sub-jobs.
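One simple way to propagate the variables is to write them into the top of the spawned job script itself, as sketched below (the variable names are from the task runtime environment; the values here are stand-ins):

```shell
#!/bin/bash
# Sketch: propagating cylc identity variables into a spawned job whose
# submission method does not copy the parent environment. The values
# below are stand-ins for those set by the real task job script.
export CYLC_SUITE_REG_NAME=baz
export CYLC_TASK_ID=foo.2010080800

JOB=$(mktemp)
{
    # write the required variables into the top of the spawned job script:
    for VAR in CYLC_SUITE_REG_NAME CYLC_TASK_ID; do
        echo "export $VAR=${!VAR}"
    done
    # the spawned job can now identify its suite and task:
    echo 'echo "suite: $CYLC_SUITE_REG_NAME task: $CYLC_TASK_ID"'
} > $JOB

# run with an empty environment to prove the variables were propagated:
RESULT=$(env -i /bin/bash $JOB)
echo "$RESULT"
rm -f $JOB
```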

One way to handle this is to write a task wrapper that modifies a copy of the detaching native job scripts, on the fly, to insert completion messaging in the appropriate places, and other variables if necessary, before invoking the (now modified) native process. A significant advantage of this method is that you don’t need to permanently modify the model or its associated native scripting for cylc. Another is that you can configure the native job setup for a single test case (running it without cylc) and then have your custom wrapper modify the test case on the fly with suite, task, and cycle-specific parameters as required.

To make this easier, for tasks that declare manual completion messaging, cylc makes its non-user-defined environment scripting available in a single variable called $CYLC_SUITE_ENVIRONMENT that can be inserted into the aforementioned native task scripts prior to calling the cylc messaging commands.

9.4.5 A Custom Task Wrapper Example

The detaching example suite contains a script model.sh that runs a pseudo-model executable as follows:

 
#!/bin/bash 
set -e 
 
MODEL="sleep 10; true" 
#MODEL="sleep 10; false"  # uncomment to test model failure 
 
echo "model.sh: executing pseudo-executable" 
eval $MODEL 
echo "model.sh: done"

This is in turn executed by a script, run-model.sh, that detaches immediately after job submission (i.e. it exits before the model executable actually runs):

 
#!/bin/bash 
set -e 
echo "run-model.sh: submitting model.sh to 'at now'" 
SCRIPT=model.sh  # location of the model job to submit 
OUT=$1; ERR=$2   # stdout and stderr log paths 
# submit the job and detach 
 
MY_TMPDIR=${CYLC_TMPDIR:-${TMPDIR:-/tmp}} 
 
RES=$MY_TMPDIR/atnow$$.txt 
( at now <<EOF 
$SCRIPT 1> $OUT 2> $ERR 
EOF 
) > $RES 2>&1 
if grep 'No atd running' $RES; then 
    echo 'ERROR: atd is not running!' 
    exit 1 
fi 
# model.sh should now be running at the behest of the 'at' scheduler. 
echo "run-model.sh: done"

Note that your at scheduler daemon must be up if you want to test this suite.

Here’s a cylc suite to run this unruly model:

 
title = "Cylc User Guide Custom Task Wrapper Example" 
 
description = """This suite runs a single task that internally submits a 
'model executable' before detaching and exiting immediately - so we have 
to handle task completion messaging manually - see the Cylc User Guide.""" 
 
[scheduling] 
    initial cycle time = 2011010106 
    final cycle time = 2011010200 
    [[special tasks]] 
        sequential = model 
    [[dependencies]] 
        [[[0,6,12,18]]] 
        graph = "model" 
 
[runtime] 
    [[model]] 
        manual completion = True 
        command scripting = model-wrapper.sh  # invoke the task via a custom wrapper 
        [[[environment]]] 
            # location of native job scripts to modify for this suite: 
            NATIVESCRIPTS = $CYLC_SUITE_DEF_PATH/native 
            # output path prefix for detached model stdout and stderr: 
            PREFIX = $HOME/detach 
            FOO = "$HOME bar $PREFIX"

The suite invokes the task by means of the custom wrapper model-wrapper.sh which modifies, on the fly, a temporary copy of the model’s native job scripts as described above:

 
#!/bin/bash 
set -e 
 
# A custom wrapper for the 'model' task from the detaching example suite. 
# See the Cylc User Guide for more information. 
 
# Check inputs: 
# location of pristine native job scripts: 
cylc util checkvars -d NATIVESCRIPTS 
# path prefix for model stdout and stderr: 
cylc util checkvars PREFIX 
 
MY_TMPDIR=${CYLC_TMPDIR:-${TMPDIR:-/tmp}} 
# Get a temporary copy of the native job scripts: 
TDIR=$MY_TMPDIR/detach$$ 
mkdir -p $TDIR 
cp $NATIVESCRIPTS/* $TDIR 
 
# Insert task-specific execution environment in $TDIR/model.sh: 
SRCH='echo "model.sh: executing pseudo-executable"' 
perl -pi -e "s@^${SRCH}@${CYLC_SUITE_ENVIRONMENT}\n${SRCH}@" $TDIR/model.sh 
 
# Task completion message scripting. Use single quotes here - we don't 
# want the $? variable to evaluate in this shell! 
MSG=' 
if [[ $? != 0 ]]; then 
   cylc task message -p CRITICAL "ERROR: model executable failed" 
   exit 1 
else 
   cylc task succeeded 
   exit 0 
fi' 
# Insert error detection and cylc messaging in $TDIR/model.sh: 
SRCH='echo "model.sh: done"' 
perl -pi -e "s@^${SRCH}@${MSG}\n${SRCH}@" $TDIR/model.sh 
 
# Point to the temporary copy of model.sh, in run-model.sh: 
SRCH='SCRIPT=model.sh' 
perl -pi -e "s@^${SRCH}@SCRIPT=$TDIR/model.sh@" $TDIR/run-model.sh 
 
# Execute the (now modified) native process: 
$TDIR/run-model.sh ${PREFIX}-${CYLC_TASK_CYCLE_TIME}-$$.out ${PREFIX}-${CYLC_TASK_CYCLE_TIME}-$$.err 
 
echo "model-wrapper.sh: see modified job scripts under ${TDIR}!" 
# EOF

If you run this suite, or submit the model task alone with cylc submit, you’ll find that the usual job submission log files for task stdout and stderr end before the task is finished. To see the “model” output and the final task completion message (success or failure), examine the log files generated by the job submitted internally to the at scheduler (their location is determined by the $PREFIX variable in the suite.rc file).

It should not be difficult to adapt this example to real tasks with detaching internal job submission. You will probably also need to replace other parameters, such as model input and output filenames, with suite- and cycle-appropriate values, but exactly the same technique can be used: identify which job script needs to be modified and use text processing tools (such as the single line perl search-and-replace expressions above) to do the job.

10 Task Job Submission

 10.1 Task Job Scripts
 10.2 Built-in Job Submission Methods
 10.3 Task stdout and stderr Logs
 10.4 Overriding the Job Submission Command
 10.5 Defining New Job Submission Methods

Task Implementation (Section 9) describes what requirements a command, script, or program must fulfill in order to function as a cylc task. This section explains how tasks are submitted by cylc when they are ready to run, and how to define new task job submission methods.

10.1 Task Job Scripts

When a task is ready to run cylc generates a temporary task job script to configure the task’s execution environment and call its command scripting. The job script is the embodiment of all suite.rc runtime settings for the task. It is submitted to run by the job submission method configured for the task. Different tasks can have different job submission methods. Like other runtime properties, you can set a suite default job submission method and override it for specific tasks or families:

# SUITE.RC 
[runtime] 
   [[root]] # suite defaults 
        [[[job submission]]] 
            method = loadleveler 
   [[foo]] # just task foo 
        [[[job submission]]] 
            method = at

The actual command line used to submit the job script is written to stdout by cylc. In the following shell transcript we generate a job script for a task in the examples.QuickStart.c example suite and then examine it:

% cylc submit --dry-run examples.QuickStart.c Model.2011080506 
> JOB SCRIPT: ~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1 
> THIS IS A DRY RUN. HERE'S HOW I WOULD SUBMIT THE TASK: 
~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1 </dev/null 
    1> ~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1.out 
    2> ~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1.err &

Here is the generated job script (note that some config items not used in this suite, such as task initial scripting, result in extra job script sections not shown here):

#!/bin/bash 
 
# ++++ THIS IS A CYLC TASK JOB SCRIPT ++++ 
# Task: foo.2010080800 
# To be submitted by method: 'background' 
 
echo "JOB SCRIPT STARTING" 
 
# CYLC LOCATION; SUITE LOCATION, IDENTITY, AND ENVIRONMENT: 
export CYLC_DIR_ON_SUITE_HOST=/home/oliverh/cylc 
export CYLC_MODE=submit 
export CYLC_DEBUG=False 
export CYLC_VERBOSE=False 
export CYLC_SUITE_HOST=oliverh-33586DL.greta.niwa.co.nz 
export CYLC_SUITE_PORT=None 
export CYLC_SUITE_REG_NAME=baz 
export CYLC_SUITE_REG_PATH=baz 
export CYLC_SUITE_OWNER=oliverh 
export CYLC_USE_LOCKSERVER=False 
export CYLC_UTC=False 
export CYLC_SUITE_INITIAL_CYCLE_TIME=2010080800 
export CYLC_SUITE_FINAL_CYCLE_TIME=None 
export CYLC_SUITE_DEF_PATH_ON_SUITE_HOST=/home/oliverh/cylc/baz 
export CYLC_SUITE_DEF_PATH=$HOME/cylc/baz 
export CYLC_SUITE_PYRO_TIMEOUT=None 
 
# CYLC TASK IDENTITY AND ENVIRONMENT: 
export CYLC_TASK_ID=foo.2010080800 
export CYLC_TASK_NAME=foo 
export CYLC_TASK_CYCLE_TIME=2010080800 
export CYLC_TASK_LOG_ROOT=$HOME/cylc-run/baz/log/job/foo.2010080800.1 
export CYLC_TASK_NAMESPACE_HIERARCHY="root foo" 
export CYLC_TASK_TRY_NUMBER=1 
export CYLC_TASK_SSH_MESSAGING=False 
export CYLC_TASK_WORK_PATH=$CYLC_SUITE_DEF_PATH/work/$CYLC_TASK_ID 
# Note the suite share path may actually be family- or task-specific: 
export CYLC_SUITE_SHARE_PATH=$CYLC_SUITE_DEF_PATH/share 
 
# ACCESS TO CYLC: 
PATH=/home/oliverh/cylc/bin:$PATH 
 
# ACCESS TO THE SUITE BIN DIRECTORY: 
PATH=$CYLC_SUITE_DEF_PATH/bin:$PATH 
export PATH 
 
# TASK RUNTIME ENVIRONMENT: 
FOO=BAR 
 
# SET ERROR TRAPPING: 
set -u # Fail when using an undefined variable 
# Define the trap handler 
HANDLE_TRAP() { 
  echo Received signal "$@" 
  cylc task failed "Task job script received signal $@" 
  trap "" EXIT 
  exit 0 
} 
# Trap signals that could cause this script to exit: 
trap "HANDLE_TRAP EXIT" EXIT 
trap "HANDLE_TRAP ERR"  ERR 
trap "HANDLE_TRAP TERM" TERM 
trap "HANDLE_TRAP XCPU" XCPU 
 
# INITIAL SCRIPTING: 
echo Hello World 
 
# SEND TASK STARTED MESSAGE: 
cylc task started 
 
# SHARE DIRECTORY CREATE: 
mkdir -p $CYLC_SUITE_SHARE_PATH || true 
 
# WORK DIRECTORY CREATE: 
mkdir -p $(dirname $CYLC_TASK_WORK_PATH) || true 
mkdir -p $CYLC_TASK_WORK_PATH 
cd $CYLC_TASK_WORK_PATH 
 
# TASK IDENTITY SCRIPTING: 
echo "cylc Suite and Task Identity:" 
echo "  Suite Name  : $CYLC_SUITE_REG_NAME" 
echo "  Suite Host  : $CYLC_SUITE_HOST" 
echo "  Suite Port  : $CYLC_SUITE_PORT" 
echo "  Suite Owner : $CYLC_SUITE_OWNER" 
echo "  Task ID     : $CYLC_TASK_ID" 
if [[ $(uname) == AIX ]]; then 
   # on AIX the hostname command has no '-f' option 
   echo "  Task Host   : $(hostname).$(namerslv -sn | awk '{print $2}')" 
else 
    echo "  Task Host   : $(hostname -f)" 
fi 
echo "  Task Owner  : $USER" 
echo "  Task Try No.: $CYLC_TASK_TRY_NUMBER" 
echo "" 
 
# TASK COMMAND SCRIPTING: 
echo Dummy command scripting; sleep 10 
 
# EMPTY WORK DIRECTORY REMOVE: 
cd 
rmdir $CYLC_TASK_WORK_PATH 2>/dev/null || true 
 
# SEND TASK SUCCEEDED MESSAGE: 
cylc task succeeded 
 
echo "JOB SCRIPT EXITING (TASK SUCCEEDED)" 
trap "" EXIT 
 
#EOF

You can also generate a job script and print it directly to stdout with cylc jobscript SUITE TASKID.

10.2 Built-in Job Submission Methods

Cylc has a number of built-in job submission methods. If these do not suit your needs, Section 10.5 shows how to extend cylc with new user-defined job submission methods.

There are also two job submission methods intended for use in cylc development and testing: background_slow, in which the job submission subprocess itself takes ten seconds to complete before the task starts executing; and fail, in which the job submission subprocess itself always fails.

10.2.1 background

This job submission method runs tasks directly in a background shell.

10.2.2 at

This job submission method submits tasks to the rudimentary Unix at scheduler. The atd daemon must be running.

10.2.3 loadleveler

This job submission method submits tasks to loadleveler using the llsubmit command. Loadleveler directives can be provided in the suite.rc file:

# SUITE.RC 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            foo = bar 
            baz = qux

These are written to the top of the task job script like this:

#!/bin/bash 
# DIRECTIVES 
# @ foo = bar 
# @ baz = qux 
# @ queue

10.2.4 pbs

This job submission method submits tasks to PBS (or Torque) using the qsub command. PBS directives can be provided in the suite.rc file:

# SUITE.RC 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            -q = foo 
            -l = 'nodes=1,walltime=00:01:00'

These are written to the top of the task job script like this:

#!/bin/bash 
# DIRECTIVES 
#PBS -q foo 
#PBS -l nodes=1,walltime=00:01:00

10.2.5 sge

This job submission method submits tasks to Sun Grid Engine using the qsub command. SGE directives can be provided in the suite.rc file:

# SUITE.RC 
[runtime] 
    [[__NAME__]] 
        [[[directives]]] 
            -cwd = ' ' 
            -q = foo 
            -l = 'h_data=1024M,h_rt=24:00:00'

These are written to the top of the task job script like this:

#!/bin/bash 
# DIRECTIVES 
#$ -cwd 
#$ -q foo 
#$ -l h_data=1024M,h_rt=24:00:00

10.2.6 Default Directives Provided

For loadleveler, pbs, and sge job submission, default directives are provided to set the job name and stdout and stderr file paths.

10.2.7 PBS and SGE Cylc Quirks

As shown in the example above, multiple entries for the same PBS or SGE directive option must be comma-separated on the same line in the suite.rc file. Otherwise, repeating the option on another line will override the previous entry rather than add to it. Also, the right-hand side must be quoted to hide the comma from the suite.rc parser (commas indicate list values, whereas directives are treated as singular).

As also shown in the example above, to get a naked option flag such as -cwd in SGE you must give a quoted blank space as the directive value in the suite.rc file.

10.3 Task stdout and stderr Logs

When a task is ready to run cylc generates a filename root to be used for the task job script and log files. The filename contains the task name, cycle time (or integer tag), and a submit number that increments if the same task is re-triggered multiple times:

# task job script: 
~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1 
# task stdout: 
~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1.out 
# task stderr: 
~/cylc-run/examples.QuickStart.c/log/job/Model.2011080506.1.err

How the stdout and stderr streams are directed into these files depends on the job submission method. The background method just uses appropriate output redirection on the command line, as shown above. The loadleveler method writes appropriate directives to the job script that is submitted to loadleveler.

Cylc obviously has no control over the stdout and stderr output from tasks that do their own internal output management (e.g. tasks that submit internal jobs and direct the associated output to other files). For less internally complex tasks, however, the files referred to here will be complete task job logs.

Cylc’s suite control GUIs can display the task job logs (updating in real time for local tasks).

10.4 Overriding the Job Submission Command

To change the form of the actual command used to submit a job you do not need to define a new job submission method; just override the command template in the relevant job submission sections of your suite.rc file:

# SUITE.RC 
[runtime] 
    [[root]] 
        [[[job submission]]] 
            method = loadleveler 
            # Use '-s' to stop llsubmit returning until all job steps have completed: 
            command template = llsubmit -s %s

As explained in the suite.rc reference (Appendix A), the template’s first %s will be substituted by the job file path and, where applicable, a second and third %s will be substituted by the paths to the job stdout and stderr files.
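The substitution can be previewed directly with printf; the template here matches the background submission command shown in the transcript earlier, and the job file path is illustrative:

```shell
#!/bin/bash
# Preview how a three-%s command template expands into the final submit
# command line (template and job file path are illustrative).
TEMPLATE='%s </dev/null 1>%s 2>%s &'
JOBFILE=$HOME/cylc-run/baz/log/job/foo.2010080800.1
CMD=$(printf "$TEMPLATE" "$JOBFILE" "$JOBFILE.out" "$JOBFILE.err")
echo "$CMD"
```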

10.5 Defining New Job Submission Methods

Defining a new job submission method requires some minimal Python programming. You can derive (in the sense of object oriented programming inheritance) new methods from one of the existing ones, or directly from cylc’s job submission base class,

$CYLC_DIR/lib/cylc/job_submission/job_submit.py

using the existing methods as examples. Most often this should merely be a matter of defining the command line used to execute the aforementioned job scripts and using the provided stdout and stderr file paths appropriately. For example, here is the entire class code for the background method:

#!/usr/bin/env python 
 
from job_submit import job_submit 
 
class background( job_submit ): 
    """ 
Run the task job script directly in a background shell. 
    """ 
    # stdin redirection (< /dev/null) allows background execution on 
    # remote hosts - ssh needn't wait for the process to finish. 
    COMMAND_TEMPLATE = "%s </dev/null 1>%s 2>%s &" 
    def construct_jobfile_submission_command( self ): 
        command_template = self.job_submit_command_template 
        if not command_template: 
            command_template = self.COMMAND_TEMPLATE 
        self.command = command_template % ( self.jobfile_path, 
                                            self.stdout_file, 
                                            self.stderr_file )

Here is the at method:

#!/usr/bin/env python 
 
from job_submit import job_submit 
 
class at( job_submit ): 
    """ 
Submit the task job script to the simple 'at' scheduler. The 'atd' daemon 
service must be running. 
    """ 
    COMMAND_TEMPLATE = "echo \"%s 1>%s 2>%s\" | at now" 
 
    def construct_jobfile_submission_command( self ): 
        command_template = self.job_submit_command_template 
        if not command_template: 
            command_template = self.COMMAND_TEMPLATE 
        self.command = command_template % ( self.jobfile_path, 
                                            self.stdout_file, 
                                            self.stderr_file )

And here is the pbs method:

 
#!/usr/bin/env python 
from job_submit import job_submit 
 
class pbs( job_submit ): 
    """ 
PBS qsub job submission. 
    """ 
 
    COMMAND_TEMPLATE = "qsub %s" 
 
    def set_directives( self ): 
        self.directive_prefix = "#PBS" 
        self.final_directive  = None 
        self.directive_connector = " " 
 
        defaults = {} 
        defaults[ '-N' ] = self.task_id 
        defaults[ '-o' ] = self.stdout_file 
        defaults[ '-e' ] = self.stderr_file 
 
        # In case the user wants to override the above defaults: 
        for d in self.directives: 
            defaults[ d ] = self.directives[ d ] 
        self.directives = defaults 
 
    def construct_jobfile_submission_command( self ): 
        command_template = self.job_submit_command_template 
        if not command_template: 
            command_template = self.COMMAND_TEMPLATE 
        self.command = command_template % ( self.jobfile_path )

10.5.1 An Example

The following user-defined job submission class, called qsub, overrides the built-in pbs class to change the directive prefix from #PBS to #QSUB:

#!/usr/bin/env python 
 
# to import from outside of the cylc source tree: 
from cylc.job_submission.pbs import pbs 
# OR, from $CYLC_DIR/lib/cylc/job_submission 
# from pbs import pbs 
 
class qsub( pbs ): 
    """ 
This is a user-defined job submission method that overrides the '#PBS' 
directive prefix of the built-in pbs method. 
    """ 
    def set_directives( self ): 
        pbs.set_directives( self ) 
        # override the '#PBS' directive prefix 
        self.directive_prefix = "#QSUB"

To check that this works correctly, save the new source file as qsub.py in one of the allowed locations (see just below) and use it in a suite definition:

# SUITE.RC 
# $HOME/test/suite.rc 
[scheduling] 
    [[dependencies]] 
        graph = "a" 
[runtime] 
    [[root]] 
        [[[job submission]]] 
            method = qsub 
        [[[directives]]] 
            -I = bar=baz 
            -l = 'nodes=1,walltime=00:01:00' 
            -cwd = ' '

and generate a job script to see the resulting directives:

$ cylc db reg test $HOME/test 
$ cylc jobscript test a | grep QSUB 
#QSUB -e /home/oliverh/cylc-run/pbs/log/job/a.1.1.err 
#QSUB -l nodes=1,walltime=00:01:00 
#QSUB -o /home/oliverh/cylc-run/pbs/log/job/a.1.1.out 
#QSUB -N a.1 
#QSUB -I bar=baz 
#QSUB -cwd

10.5.2 Where To Put New Job Submission Modules

Your new job submission class code should be saved to a file with the same name as the class (plus the “.py” extension). It can reside in any of the following locations, depending on how generally useful the new method is and whether or not you have write-access to the cylc source tree:

Note that the form of the import statement at the top of the new user-defined Python module differs depending on whether or not the file is installed in the cylc source tree (see the comment at the top of the example file above).

11 Running Suites

 11.1 How Tasks And Control Commands Interact With Suites
 11.2 Restarting Suites
 11.3 Task States
 11.4 Determining The Suite Network Port
 11.5 Ensemble Suites, Job Submission, and Network Timeouts
 11.6 Internal Queues And The Runahead Limit
 11.7 Automatic Task Retry On Failure
 11.8 Suite And Task Event Handling
 11.9 Reloading The Suite Definition At Runtime
 11.10 Handling Job Preemption
 11.11 Runtime Settings Broadcast and Communication Between Tasks
 11.12 The Meaning And Use Of Initial Cycle Time
 11.13 The Simulation And Dummy Run Modes
 11.14 Automated Reference Test Suites

This section may be incomplete - please see also the Quick Start Guide (Section 6), command documentation (Section C), and play with the example suites.

11.1 How Tasks And Control Commands Interact With Suites

All interaction with a running suite by control commands, gcylc GUI instances, and messaging commands from running tasks, uses the Pyro network RPC (Remote Procedure Call) protocol. Each suite listens on its own network port; cylc starts grabbing ports at 7766 by default - this is configured in the site config file $CYLC_DIR/conf/site/site.rc, see cylc get-global-config --help.

However, if the Pyro network ports are blocked at your site you can get remote control commands and remote task messaging calls to re-invoke themselves on the suite host using passwordless ssh, so that the ultimate Pyro connection to the suite only occurs on the suite host.

11.1.1 Pyro Connections And Secure Passphrases

All Pyro connections to a suite require passphrase authentication. The passphrase is just an arbitrary single line of text. A secure MD5 checksum is passed across the network, not the raw passphrase itself. A random suite passphrase is generated in the suite definition directory when a suite is registered, but you can create your own as you wish.

Passphrases currently have to be installed manually in any user account, local or remote, that hosts tasks from the suite, or from which you want to interact with the suite without using the ssh remote communication method. Legal passphrase locations, in order of preference, are:

  1. $CYLC_SUITE_DEF_PATH/passphrase
  2. $HOME/.cylc/SUITE_HOST/SUITE_OWNER/SUITE_NAME/passphrase
  3. $HOME/.cylc/SUITE_HOST/SUITE_NAME/passphrase
  4. $HOME/.cylc/SUITE_NAME/passphrase
  5. (or, optionally, given on the commandline with -p FILE)

The locations under $HOME/.cylc are suitable for remote suite control accounts - in which case the suite definition directory may not be known or accessible. On remote task hosts the suite definition directory defaults to the same location relative to $HOME as on the suite host; or the remote location can be specified explicitly in the suite definition.
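The search order can be sketched in the shell (locations from the list above; the demo fakes a passphrase under $HOME/.cylc/SUITE_NAME so it is self-contained):

```shell
#!/bin/bash
# Sketch of the passphrase file search order: try each legal location in
# order of preference and use the first one found. A fake home directory
# keeps the demo self-contained.
HOME=$(mktemp -d)
SUITE=baz; SUITE_HOST=suitehost; SUITE_OWNER=oliverh
CYLC_SUITE_DEF_PATH=$HOME/no-such-suite-def

# fake the lowest-preference location only:
mkdir -p $HOME/.cylc/$SUITE
echo "some secret line" > $HOME/.cylc/$SUITE/passphrase

FOUND=
for LOC in \
    "$CYLC_SUITE_DEF_PATH/passphrase" \
    "$HOME/.cylc/$SUITE_HOST/$SUITE_OWNER/$SUITE/passphrase" \
    "$HOME/.cylc/$SUITE_HOST/$SUITE/passphrase" \
    "$HOME/.cylc/$SUITE/passphrase"
do
    if [ -f "$LOC" ]; then
        FOUND=$LOC
        break
    fi
done
echo "found passphrase at: $FOUND"
```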

11.1.2 Ssh-Pyro Connections

For ssh-based remote task messaging and suite control, ssh keys must be installed in the relevant accounts to allow passwordless ssh connections back to the suite host account. The suite passphrase is then only required on the suite host.

11.1.3 Choosing The Communication Method

The Pyro connection method is the default because it is simpler and more efficient. If your remote task host does not have the right network ports open, however, and the IT staff are reluctant to change that, then the ssh alternative is available, so long as your local system and network configuration do not also prevent ssh connections back to the suite host - in that case see Section 11.1.4 below.

To use the ssh-based method:

11.1.4 If You Cannot Use Pyro or Ssh Connections

It has come to our attention that some HPC facilities do not have any network routing back out of the compute nodes, for security reasons and/or to avoid network chatter that could affect performance. Any kind of communication back to a cylc suite running on an external host would then presumably be impossible, but:

11.2 Restarting Suites

A restarted suite (see cylc restart --help) is initialized from a previous recorded suite state dump, so that it can carry on from wherever it got to before being shut down or killed.

Prior to cylc-5.0 any tasks recorded in the submitted or running states were automatically resubmitted on restarting, on the basis that they might not have completed while the suite was down and so should re-run just to be safe. Similarly, any tasks recorded as failed were automatically resubmitted on the basis that whatever caused them to fail might have been fixed while the suite was down.

Since cylc-5.0 we no longer assume anything on restarting - task proxies are now loaded in exactly their recorded states and it is up to users to intervene in restarted suites according to their knowledge of what happened to the real tasks at or after suite shutdown (e.g. by re-triggering task proxies shown in the running state if the corresponding real tasks are actually not still running).

11.3 Task States

As a suite runs its tasks (or rather the task proxies that represent the real tasks) may move through a number of defined states:

Finally, there is also a pseudo-state displayed by the gcylc graph view:

And a pseudo-state for task reset purposes:

11.4 Determining The Suite Network Port

All Pyro communication with a running suite requires knowing the network port on which the suite is listening. Running tasks know their own suite’s port number because the suite puts it in the task execution environment, but user commands must determine the port number somehow. Through to the 4.5.1 release, inclusive, cylc used port scanning to find the target port, but this could potentially cause delays on hosts with a large number of suites running at once. Consequently suites now write their own port number to $HOME/.cylc/ports/<SUITE> at start-up, and commands issued on the suite host read this file to get the port number. Commands issued on a remote host use passwordless ssh to the suite owner account to retrieve the port number. Alternatively, if passwordless ssh is not configured to the suite host the port number can be given on the command line with the --port option. If you accidentally delete a suite’s port file while the suite is still running, use cylc scan to determine the port number to use on the command line.
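The port file mechanism can be sketched as follows (the temporary directory here stands in for $HOME/.cylc/ports):

```shell
#!/bin/bash
# Sketch of the port file mechanism: the suite writes its port number to
# a file named after the suite, and local commands read it back.
PORTDIR=$(mktemp -d)   # stand-in for $HOME/.cylc/ports
SUITE=baz

echo 7766 > $PORTDIR/$SUITE       # what the suite does at start-up
PORT=$(cat $PORTDIR/$SUITE)       # what a command on the suite host does

echo "suite $SUITE is listening on port $PORT"
rm -rf $PORTDIR
```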

11.5 Ensemble Suites, Job Submission, and Network Timeouts

11.5.1 Parallel Submission Of Jobs Ready At The Same Time

Cylc now handles task job submission in a dedicated worker thread so that submission of many remote tasks at once does not impact cylc’s performance or responsiveness.

Further, for maximum efficiency, job submissions are batched inside the worker thread: batch members are submitted in parallel, and all members must complete (the job submission process, that is, not the submitted task) before the next batch is handled. There is a configurable delay between batches to avoid swamping the host system in the event that hundreds of tasks become ready at the same time. The default batch size of 10 and delay of 15 seconds can be overridden in suite definitions:

# SUITE.RC 
[cylc] 
    [[job submission]] 
        batch size = 50 
        delay between batches = 30

Here a 120 task ensemble, for example, would be submitted in two batches of 50 followed by one of 20, with a 30 second delay between batches.
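
The batching scheme described above can be sketched in Python (an illustration of the scheme only, not cylc's actual implementation; submit stands for any per-job submission function):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def submit_in_batches(jobs, submit, batch_size=10, delay=15):
    """Submit jobs in parallel batches: every submission in a batch must
    return before the next batch starts, with a configurable delay
    between batches (defaults of 10 and 15s, as in the text)."""
    batches = 0
    for i in range(0, len(jobs), batch_size):
        if i > 0:
            time.sleep(delay)               # inter-batch delay
        batch = jobs[i:i + batch_size]
        with ThreadPoolExecutor(max_workers=len(batch)) as pool:
            list(pool.map(submit, batch))   # wait for the whole batch
        batches += 1
    return batches

# A 120-member ensemble with batch size 50 goes out as 50 + 50 + 20:
n = submit_in_batches(list(range(120)), lambda j: None, batch_size=50, delay=0)
print(n)  # → 3
```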

11.5.2 Network Connection Timeouts

A connection timeout can be set in site/user config files (see Section 4.3) so that messaging commands cannot hang indefinitely if the suite is not responding (this can happen, for example, if a suite is suspended with Ctrl-Z), which would prevent the task from completing. The same can be done on the command line for other suite-connecting user commands, with the --pyro-timeout option.

11.6 Internal Queues And The Runahead Limit

Some cylc suites have the potential to generate too much activity at once by virtue of the fact that each task cycles independently constrained only by dependence on other tasks or by clock triggers. Quick-running tasks at the top of the dependency tree with no prerequisites and no clock-triggers (or when running far behind the clock) will spawn rapidly into the future if not constrained somehow. There are two issues to be aware of here: over-burdening task host resources by submitting too many tasks at once, and over-burdening cylc itself by letting the task pool become too big (when fast tasks spawn ahead of the pack cylc has to keep them around in the succeeded state until other tasks, which may depend on them, have caught up).

11.6.1 The Suite Runahead Limit

The runahead limit prevents the fastest tasks in a suite from getting too far ahead of the slowest ones. Cylc’s cycle-interleaving abilities make for generally efficient scheduling, but there is no great advantage in letting a few fast data retrieval tasks, say, get far ahead of the slower tasks because it is typically the tasks at the bottom of the dependency tree, which necessarily run last, that generate the final products.

#SUITE.RC 
[scheduling] 
    runahead limit = 48 # hours

A cycling task spawns its successor when it enters the submitted state or, for sequential tasks, when it finishes. If a newly spawned task’s cycle time is ahead of the oldest task that has not yet finished (succeeded or failed) by more than the runahead limit, it is put into the special runahead held state until the other tasks catch up sufficiently.

In cylc-4.5.1 and earlier it was possible to stall a suite by setting the runahead limit too low; now, however, it is applied in such a way that it simply constrains the number of different cycles that can run at once. If the limit is set to the smallest interval between cycles, or less, just a single cycle will run at a time.

The default runahead limit is now set to twice the smallest of the cycling intervals of the suite’s cycling modules (i.e. the graph section headings). For a suite that has 1- and 24-hourly cycling tasks the default limit will be 2 hours, so that two of the hourly cycles will run at once in between the 24-hourly cycles.
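
The default limit computation is simple arithmetic; as a sketch (interval values assumed to be in hours):

```python
def default_runahead_limit(cycling_intervals):
    """Default runahead limit: twice the smallest cycling interval,
    in hours (behaviour as described in the text)."""
    return 2 * min(cycling_intervals)

# A suite with 1- and 24-hourly cycling tasks:
print(default_runahead_limit([1, 24]))  # → 2
```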

Succeeded and failed tasks are ignored when computing the runahead limit (but tasks that can’t run because they depend on a failed task are not ignored, of course).

11.6.2 Internal Queues

Large suites could potentially swamp the task host hardware or external batch queueing system, depending on the chosen job submission method, by submitting too many tasks at once. Cylc’s internal queues prevent this by limiting the number of tasks, within defined groups, that are active (submitted or running) at once.

A queue is defined by a name; a limit, which is the maximum number of active tasks allowed for the queue; and a list of member tasks, which are assigned by name to the queue.
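
How a limit constrains task release can be sketched as follows (illustrative only; a limit of zero means no limit, as for the default queue described below):

```python
def release_from_queue(waiting, limit, n_active):
    """Release as many ready tasks as the queue limit allows, given the
    number already active (submitted or running); 0 means no limit."""
    n = len(waiting) if limit == 0 else max(0, limit - n_active)
    released, still_queued = waiting[:n], waiting[n:]
    return released, still_queued

# Three tasks ready, limit of 2, one already active: only one released.
ready = ["foo.2010080800", "bar.2010080800", "baz.2010080800"]
released, queued = release_from_queue(ready, limit=2, n_active=1)
print(released)  # → ['foo.2010080800']
```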

Queue configuration is done under the [scheduling] section of the suite.rc file, not as part of the runtime namespace hierarchy, because, like dependencies, queues constrain when a task runs rather than what it does once it is running. When runtime family relationships and queues do coincide, you can assign task family members en masse to queues by using the family name, as shown in the example suite listing below.

By default every task is assigned to a default queue, which by default has a zero limit (interpreted by cylc as no limit). To use a single queue for the whole suite just set the default queue limit:

#SUITE.RC 
[scheduling] 
    [[queues]] 
        # limit the entire suite to 5 active tasks at once 
        [[[default]]] 
            limit = 5

To use other queues just name each one, set the limit, and assign member tasks:

#SUITE.RC 
[scheduling] 
    [[queues]] 
        [[[q_foo]]] 
            limit = 5 
            members = foo, bar, baz

Any tasks not assigned to a particular queue will remain in the default queue. The queues example suite illustrates how queues work by running two task trees side by side (as seen in the graph GUI), limited to two and three active tasks respectively:

 
title = demonstrates internal queueing 
description = """ 
Two trees of tasks: the first uses the default queue set to a limit of 
two active tasks at once; the second uses another queue limited to three 
active tasks at once. Run via the graph control GUI for a clear view. 
              """ 
[scheduling] 
    [[queues]] 
        [[[default]]] 
            limit = 2 
        [[[foo]]] 
            limit = 3 
            members = n, o, p, fam2, u, v, w, x, y, z 
    [[dependencies]] 
        graph = """ 
            a => b & c => fam1:succeed-all => h & i & j & k & l & m 
            n => o & p => fam2:succeed-all => u & v & w & x & y & z 
                """ 
[runtime] 
    [[fam1,fam2]] 
    [[d,e,f,g]] 
        inherit = fam1 
    [[q,r,s,t]] 
        inherit = fam2

Note the assignment of runtime task family members to queues using the family name.

11.7 Automatic Task Retry On Failure

See also Section A.4.1.7 in the Suite.rc Reference.

Tasks can be configured with a list of “retry delay” periods, in minutes, such that if a task fails it will go into a temporary retrying state and then automatically resubmit itself after the next specified delay period expires. A usage example is shown in the suite listed below under Suite And Task Event Handling, Section 11.8.

11.8 Suite And Task Event Handling

See also Sections A.2.7 and A.4.1.17 in the Suite.rc Reference.

Cylc can call nominated event handlers when certain suite or task events occur. This is intended to facilitate centralized alerting and automated handling of critical events. Event handlers can send an email or an SMS, call a pager, and so on; or intervene in the operation of their own suite using cylc commands. cylc [hook]email-suite and cylc [hook]email-task are example event handlers packaged with cylc.

Custom task event handlers can be located in the suite bin directory. They are passed the following arguments by cylc:

<handler> EVENT SUITE TASKID MESSAGE

where EVENT is one of the following strings:

MESSAGE, if provided, describes what has happened, and TASKID identifies the task (NAME.CYCLE for cycling tasks).

The retry event occurs if a task fails and has any remaining retries configured (see Section 11.7). The event handler will be called as soon as the task fails, not after the retry delay period when it is resubmitted.

Note that event handlers are called by cylc itself, not by the running tasks, so if you wish to pass them additional information via the environment you must use [cylc] [[environment]], not task runtime environments.
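
A minimal custom handler might look like this (written as a shell function for illustration; in practice it would be an executable script in the suite bin directory, and the alerting action is site-specific):

```shell
handle_event() {
    # Arguments as passed by cylc: EVENT SUITE TASKID MESSAGE
    local event="$1" suite="$2" taskid="$3" message="$4"
    echo "suite ${suite}: task ${taskid} got event '${event}': ${message}"
    # A real handler might send an email or a page here, or intervene
    # in its own suite with cylc commands.
}
handle_event failed my.suite foo.2010080806 "job exited non-zero"
```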

Here is an example suite that tests the retry and failed events. The handler in this case simply echoes its command line arguments to suite stdout.

[scheduling] 
    initial cycle time = 2010080800 
    final cycle time = 2010081000 
    [[dependencies]] 
        [[[0]]] 
            graph = "foo => bar" 
[runtime] 
    [[foo]] 
        retry delays = 0, 0.5 
        command scripting = """ 
echo TRY NUMBER: $CYLC_TASK_TRY_NUMBER 
sleep 10 
# retry twice and succeed on the final try, 
# but fail definitively in the final cycle. 
if (( CYLC_TASK_TRY_NUMBER <= 2 )) || \ 
    (( CYLC_TASK_CYCLE_TIME == CYLC_SUITE_FINAL_CYCLE_TIME )); then 
    echo ABORTING 
    /bin/false 
fi""" 
        [[[event hooks]]] 
            retry handler = "echo !!!!!EVENT!!!!! " 
            failed handler = "echo !!!!!EVENT!!!!! "

11.9 Reloading The Suite Definition At Runtime

The cylc reload command, an experimental feature new in cylc-4.5.2, reloads the suite definition at run time. This allows: (a) changing task config such as command scripting or environment; (b) adding tasks to, or removing them from, the suite definition, at run time - without shutting the suite down and restarting it. (It is easy to shut down and restart cylc suites, but reloading may be useful if you don’t want to wait for long-running tasks to finish first).

Note that defined tasks can already be added to or removed from a running suite with the ’cylc insert’ and ’cylc remove’ commands; the reload command allows addition and removal of task definitions. If a new task definition is added (and used in the graph) you will still need to manually insert an instance of it (with a particular cycle time) into the running suite. If a task definition (and its use in the graph) is deleted, existing task proxies of the deleted type will run their course after the reload, but new instances will not be spawned. Changes to a task definition will only take effect when the next task instance is spawned (existing instances will not be affected).

11.10 Handling Job Preemption

Some HPC facilities allow job preemption: the resource manager can kill or suspend running low priority jobs in order to make way for high priority jobs. The preempted jobs may then be automatically restarted by the resource manager, from the same point (if suspended) or requeued to run again from the start (if killed). If a running cylc task is suspended or hard-killed (SIGKILL, as sent by kill -9 <PID>, cannot be trapped, so cylc cannot detect task failure in this case) and then later restarted, it will just appear to cylc to take longer than normal to run. If the job is soft-killed, the signal will be trapped by the task job script and a failure message sent, resulting in cylc putting the task into the failed state. When the preempted task restarts and sends its started message, cylc would normally treat this as an error condition (a dead task is not supposed to be sending messages): a warning will be logged and the task will remain in the failed state. However, if you know that preemption is possible on your system, you can tell cylc that affected tasks should be resurrected from the dead, to carry on as normal if progress messages start coming in again after a failure:

# SUITE.RC 
# ... 
[runtime] 
    [[on_HPC]] 
        enable resurrection = True 
    [[TaskFoo]] 
        inherit = on_HPC 
# ...

To test this in any suite, manually kill a running task then, after cylc registers the task failed, resubmit the killed job manually by cutting-and-pasting the original job submission command from the suite stdout stream.

11.11 Runtime Settings Broadcast and Communication Between Tasks

The cylc broadcast command overrides [runtime] settings in a running suite. This can be used to communicate information to downstream tasks by broadcasting environment variables (communication of information from one task to another normally takes place via the filesystem, i.e. the input/output file relationships embodied in inter-task dependencies). Variables (and any other runtime settings) may be broadcast to all subsequent tasks, or targeted at a specific task, at all subsequent tasks with a given name, or at all tasks with a given cycle time; see broadcast command help for details.

Broadcast settings targeted at a specific task ID or cycle time expire and are forgotten as the suite moves on. Untargeted variables, and those targeted at a task name, persist throughout the suite run, even across restarts, unless manually cleared using the broadcast command - and so should be used sparingly.

11.12 The Meaning And Use Of Initial Cycle Time

When a suite is started with the cylc run command (cold or warm start) the cycle time at which it starts can be given on the command line or hardwired into the suite.rc file:

cylc run foo 2012080806

or,

# SUITE.RC 
[scheduling] 
    initial cycle time = 2010080806

An initial cycle time given on the command line will override one in the suite.rc file.

11.12.1 The Environment Variable CYLC_SUITE_INITIAL_CYCLE_TIME

In the case of cold starts only, the initial cycle time will also be passed through to task execution environments as $CYLC_SUITE_INITIAL_CYCLE_TIME. The intended use of this variable is to allow tasks to determine whether they are running in the initial cold-start cycle (when different behaviour may be required) or in a normal mid-run cycle. This is not done for warm starts because a warm start is really an implicit restart - it does not reference a particular previous suite state but it does assume that a previous cycle (for each task) has been run and completed entirely. It follows that in a warm start tasks are really in a normal mid-run cycle, and because no actual previous state is referenced $CYLC_SUITE_INITIAL_CYCLE_TIME gets the value None. After a cold start, however, the value of the environment variable does persist across restarts because the original cold-start cycle time is stored in suite state dump files.
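
A task might use the variable like this (a sketch; both variables are set by cylc in the real task execution environment, with illustrative values hard-wired here):

```shell
CYLC_SUITE_INITIAL_CYCLE_TIME=2010080806   # set by cylc after a cold start
CYLC_TASK_CYCLE_TIME=2010080806            # this task's own cycle time
if [ "$CYLC_TASK_CYCLE_TIME" = "$CYLC_SUITE_INITIAL_CYCLE_TIME" ]; then
    MODE=cold-start   # e.g. generate initial conditions from scratch
else
    MODE=mid-run      # e.g. use restart files from the previous cycle
fi
echo "running in $MODE mode"
```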

11.13 The Simulation And Dummy Run Modes

Since cylc-4.6.0 any cylc suite can run in live, simulation, or dummy mode. Prior to that release simulation mode was a hybrid mode that replaced real tasks with local dummy tasks. This allowed local simulation testing of any suite, to get the scheduling right without running real tasks, but running dummy tasks locally does not add much value over a pure simulation (in which no tasks are submitted at all) because all job submission configuration has to be ignored and most task job script sections have to be cut out to avoid any code that could potentially be specific to the intended task host. So at 4.6.0 we replaced this with a pure simulation mode (task proxies go through the running state automatically within cylc, and no dummy tasks are submitted to run) and a new dummy mode in which only the real task command scripting is dummied out - each dummy task is submitted exactly as the task it represents on the correct host and in the same execution environment. A successful dummy run confirms not only that the scheduling works correctly but also tests real job submission, communication from remote task hosts, and the real task job scripts (in which errors such as use of undefined variables will cause a task to fail).

The run mode, which defaults to live, is set on the command line (for run and restart):

% cylc run --mode=dummy SUITE

but you can configure the suite to force a particular run mode,

# SUITE.RC 
[cylc] 
    force run mode = simulation

This can be used, for example, for demo suites that necessarily run out of their original context; or to temporarily prevent accidental execution of expensive real tasks during suite development.

Dummy task command scripting just prints a message and sleeps for ten seconds by default, but you can override this behaviour for particular tasks or task groups if you like. Here’s how to make a task sleep for twenty seconds and then fail in dummy mode:

# SUITE.RC 
[runtime] 
    [[foo]] 
        command scripting = "run-real-task.sh" 
        dummy mode command scripting = """ 
echo "hello from dummy task $CYLC_TASK_ID" 
sleep 20 
echo "ABORTING" 
/bin/false"""

Finally, in simulation mode each task takes between 1 and 15 seconds to “run” by default, but you can also alter this for particular tasks or groups of tasks:

# SUITE.RC 
[runtime] 
    [[foo]] 
        run time range = 20,31 # (between 20 and 30 seconds) 
        command scripting = "echo ABORTING; /bin/false" # fail in dummy mode

Note that to get a failed simulation or dummy mode task to succeed on re-triggering, just change the suite.rc file appropriately and reload the suite definition at run time with cylc reload SUITE before re-triggering the task.

Dummy mode is equivalent to commenting out each task’s command scripting to expose the default scripting.

11.13.1 The Non-live-mode Accelerated Clock

In simulation and dummy mode cylc uses an accelerated clock with configurable rate and offset relative to the suite’s initial cycle time. This affects the trigger time of any clock-triggered tasks in the suite, and the length of time between cycles if simulating “caught up” operation (without this a six-hour cycling suite, for instance, would wait six hours between cycles when simulating caught-up operation, even though the simulated or dummy tasks run very quickly). By configuring the initial clock offset you can quickly simulate how suites catch up and transition from delayed to real time operation.

See Section A.2.10 for accelerated clock configuration settings.

11.13.2 Restarting Suites With A Different Run Mode?

The run mode is recorded in the suite state dump file. Cylc will not let you restart a non-live mode suite in live mode, or vice versa - any attempt to do the former would certainly be a mistake (because the simulation mode dummy tasks do not generate any of the real outputs depended on by downstream live tasks), and the latter, while feasible, would corrupt the live state dump by turning it over to simulation mode. The easiest way to test a live suite in simulation mode, if you don’t want to obliterate the current state dump by doing a cold or warm start (as opposed to a restart from the previous state) is to take a quick copy of the suite and run the copy in simulation mode. However, if you really want to run a live suite forward in simulation mode without copying it, do this:

  1. Back up the live mode suite state dump file.
  2. Edit the mode line in the state dump and restart in simulation mode.
  3. Later, restore the backed-up live state dump and restart the live suite from it.

11.14 Automated Reference Test Suites

Reference tests are finite-duration suite runs that abort with non-zero exit status if certain conditions occur - by default, if any task fails, if the suite times out, or if the shutdown event handler fails.

The default shutdown event handler for reference tests is cylc hook check-triggering which compares task triggering information (what triggers off what at run time) in the test run suite log to that from an earlier reference run, disregarding the timing and order of events - which can vary according to the external queueing conditions, runahead limit, and so on.

To prepare a reference log for a suite, run it with the --reference-log option, and manually verify the correctness of the reference run.

To reference test a suite, just run it (in dummy mode for the most comprehensive test without running real tasks) with the --reference-test option.

A battery of reference tests is (or soon will be) used to automatically test cylc before posting a new release version. They can also be used at cylc upgrade time to check that changes in cylc will not break your own complex suites - the triggering check will catch any bug that causes a task to run when it shouldn’t, for instance; and even in a dummy mode reference test the full task job script (sans real command scripting) has to execute successfully on the proper task host by the proper job submission method.

The reference test can be configured with the following settings:

# SUITE.RC 
[cylc] 
    [[reference test]] 
        suite shutdown event handler = cylc check-triggering 
        required run mode = dummy 
        allow task failures = False 
        live mode suite timeout = 5 # minutes 
        dummy mode suite timeout = 2 
        simulation mode suite timeout = 2

11.14.1 Roll-your-own Reference Tests

If the default reference test is not sufficient for your needs, firstly note that you can override the default shutdown event handler, and secondly that the --reference-test option is merely a short cut to the following suite.rc settings which can also be set manually if you wish:

# SUITE.RC 
[cylc] 
    abort if any task fails = True 
    [[event hooks]] 
        shutdown handler = cylc check-triggering 
        timeout = 5 
        abort if shutdown handler fails = True 
        abort on timeout = True

12 Other Topics In Brief

The following topics have yet to be documented in detail.

13 Suite Discovery, Sharing, And Revision Control

Until release 4.2.2 cylc had a “central suite database” that users could export to and import from, for sharing suites. It was essentially just a special instance of a user suite database, held under the cylc admin account and with associated export and import commands to copy suite definition directories to and from a central store, with the suite owner’s username as the first hierarchical name component. However, it was not on the network, and the suite store had to be writeable by all users and hence very insecure. Rather than develop a network server for better security and wider access, as was the original intention, it was decided to remove this functionality entirely for the following reasons:

We may in the future recommend particular tools that can be used for suite discovery, revision control, and so on, with cylc suites.

14 Suite Design Principles

 14.1 Make Fine-Grained Suites
 14.2 Make Tasks Rerunnable
 14.3 Make Models Rerunnable
 14.4 Limit Previous-Instance Dependence
 14.5 Put Task Cycle Time In All Output File Paths
 14.6 How To Manage Input/Output File Dependencies
 14.7 Use Generic Task Scripts
 14.8 Make Suites Portable
 14.9 Make Tasks As Self-Contained As Possible
 14.10 Make Suites As Self-Contained As Possible
 14.11 Orderly Product Generation?
 14.12 Clock-triggered Tasks Wait On External Data
 14.13 Do Not Treat Real Time Operation As Special

14.1 Make Fine-Grained Suites

A suite can contain a small number of large, internally complex tasks; a large number of small, simple tasks; or anything in between. Cylc can easily handle a large number of tasks, however, so there are definite advantages to fine-graining:

14.2 Make Tasks Rerunnable

It should be possible to rerun a task by simply resubmitting it for the same cycle time. In other words, failure at any point during execution of a task should not render a rerun impossible by corrupting the state of some internal-use file, or whatever. It is difficult to overstate the usefulness of being able to rerun the same task multiple times, either outside of the suite with cylc submit, or by retriggering it within the running suite, when debugging a problem.

14.3 Make Models Rerunnable

If a warm-cycled model simply overwrites its restart files in each run, the only cycle that can subsequently run is the next one. This is dangerous because if, accidentally or otherwise, the task runs for the wrong cycle time, its restart files will be corrupted such that the correct cycle can no longer run (probably necessitating a cold-start). Instead, consider organising restart files by cycle time, through a file or directory naming convention, and keep them in a simple rolling archive (cylc’s filename templating and housekeeping utilities can easily do this for you). Then, given availability of external inputs, you can easily rerun the task for any cycle still in the restart archive.

14.4 Limit Previous-Instance Dependence

Cylc does not require that successive instances of the same task run sequentially. In order to take advantage of this and achieve maximum functional parallelism whenever the opportunity arises (usually when catching up from a delay) you should ensure that tasks that in principle do not depend on their own previous instances (the vast majority of tasks in most suites, in fact) do not do so in practice. In other words, they should be able to run as soon as their prerequisites are satisfied regardless of whether or not their predecessors have finished yet. This generally just means ensuring that all file I/O contains the generating task’s cycle time in the file or directory name so that there is no interference between successive instances. If this is difficult to achieve in particular cases, however, you can declare the offending tasks to be sequential.

14.5 Put Task Cycle Time In All Output File Paths

Having all filenames, or perhaps the names of their containing directories, stamped with the cycle time of the generating task greatly aids in managing suite disk usage, both for archiving and cleanup. It also enables the aforementioned task rerunnability recommendation by avoiding overwrite of important files from one cycle to the next. Cylc has powerful utilities for cycle time offset filename templating and housekeeping.

14.5.1 Use Cylc Cycle Time Filename Templating

The command line utility program cylc [util] cycletime computes offsets (in hours, days, months, and years) from a given or current (in the environment) cycle time, and optionally inserts the resulting computed cycle time, or components of it, into a given template string containing “YYYY” as a placeholder for the year value, “MM” for month, and so on. This can be used in the suite.rc environment or command scripting sections, or in task implementation scripting, to generate filenames containing the current cycle time (or some offset from it) for use by tasks.

See cylc [util] cycletime --help for examples.
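
The effect of such offset and templating computations can be sketched in Python (this illustrates the idea only and is not the cylc utility itself; the function names are made up):

```python
from datetime import datetime, timedelta

def offset_cycle(cycle, hours=0):
    """Apply an hour offset to a YYYYMMDDHH cycle time string."""
    t = datetime.strptime(cycle, "%Y%m%d%H") + timedelta(hours=hours)
    return t.strftime("%Y%m%d%H")

def fill_template(template, cycle):
    """Replace YYYY, MM, DD, HH placeholders with cycle time components."""
    t = datetime.strptime(cycle, "%Y%m%d%H")
    return (template.replace("YYYY", f"{t.year:04d}")
                    .replace("MM", f"{t.month:02d}")
                    .replace("DD", f"{t.day:02d}")
                    .replace("HH", f"{t.hour:02d}"))

# Name an input file generated six hours before the current cycle:
prev = offset_cycle("2010080806", hours=-6)      # "2010080800"
print(fill_template("obs-YYYYMMDDHH.nc", prev))  # → obs-2010080800.nc
```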

14.6 How To Manage Input/Output File Dependencies

Dependencies between tasks usually, though not always, take the form of files generated by one task that are used by other tasks. It is possible to manage these files across a suite without hard wiring I/O locations and therefore compromising suite flexibility and portability.

14.7 Use Generic Task Scripts

If your suite contains multiple logically distinct tasks that actually have similar functionality (e.g. for moving files around, or for generating similar products from the output of several similar models) have the corresponding cylc tasks all call the same command, script, or executable - just provide different input parameters via the task command scripting and/or execution environment, in the suite.rc file.

14.8 Make Suites Portable

If every task in a suite is configured to put its output under $HOME (i.e. the environment variable, literally, not the explicit path to your home directory; and similarly for temporary directories, etc.) then other users will be able to copy the suite and run it immediately, after merely ensuring that any external input files are in the right place.

For the ultimate in portability, construct suites in which all task I/O paths are dynamically configured to be user and suite (registration) specific, e.g.

$HOME/output/$CYLC_SUITE_REG_PATH

(these variables are automatically exported to the task execution environment by cylc - see Task Execution Environment, Section 8.4.7). Then you can run multiple instances of the suite at once (even under the same user account) without changing anything, and they will not interfere with each other.

You can test changes to a portable suite safely by making a quick copy of it in a temporary directory, then modifying and running the test copy without fear of corrupting the output directories, suite logs, and suite state, of the original.

14.9 Make Tasks As Self-Contained As Possible

Where possible, no task should rely on the action of another task, except for the prerequisites embodied in the suite dependency graph that it has no choice but to depend on. If this rule is followed, your suite will be as flexible as possible in terms of being able to run single tasks, or subsets of the suite, whilst debugging or developing new features. For example, every task should create its own output directories if they do not already exist, instead of assuming their existence due to the action of some other task; then you will be able to run single tasks without having to manually create output directories first.

# manual task scripting: 
  # 1/ create $OUTDIR if it doesn't already exist: 
  mkdir -p $OUTDIR 
  # 2/ create the parent directory of $OUTFILE if it doesn't exist: 
  mkdir -p $( dirname $OUTFILE ) 
 
# OR using the cylc checkvars utility: 
  # 1/ check vars are defined, and create directories if necessary: 
  cylc util checkvars -c OUTDIR1 OUTDIR2 #... 
  # 2/ check vars are defined, and create parent dirs if necessary: 
  cylc util checkvars -p OUTFILE1 OUTFILE2 #...

14.10 Make Suites As Self-Contained As Possible

The only compulsory content of a cylc suite definition directory is the suite.rc file (and you’ll almost certainly have a suite bin sub-directory too). However, you can store whatever you like in a suite definition directory; other files there will be ignored by cylc but suite tasks can access them via the $CYLC_SUITE_DEF_PATH variable that cylc automatically exports into the task execution environment. Disk space is cheap - if all programs, ancillary files, control files (etc.) required by the suite are stored in the suite definition directory instead of having the suite reference external build directories (etc.), you can turn the directory into a revision control repository and be virtually assured of the ability to exactly reproduce earlier versions, regardless of suite complexity.

14.11 Orderly Product Generation?

Correct scheduling is not equivalent to “orderly generation of products by cycle time”. Under cylc, a product generation task will trigger as soon as its prerequisites are satisfied (i.e. when its input files are ready, generally) regardless of whether other tasks with the same cycle time have finished or have yet to run. If your product delivery or presentation system demands that all products for one cycle time are uploaded (or whatever) before any from the next cycle, then be aware that this may be quite inefficient if your suite is ever faced with catching up from a significant delay or running over historical data.

If you must, however, you can introduce artificial dependencies into your suite to ensure that the final products never arrive out of sequence. One way of doing this would be to have a final “product upload” task that depends on completion of all the real product generation tasks at the same cycle time, and then declare it to be sequential.
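
A sketch of that approach in suite.rc terms (task and family names here are hypothetical; sequential special tasks and family triggers are described elsewhere in this guide):

# SUITE.RC 
[scheduling] 
    [[special tasks]] 
        sequential = upload_products 
    [[dependencies]] 
        [[[0]]] 
            graph = "PRODUCTS:succeed-all => upload_products"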

14.12 Clock-triggered Tasks Wait On External Data

All tasks in a cylc suite know their own private cycle time, but most don’t care about the wall clock time - they just run when their prerequisites are satisfied. The exception to this is clock-triggered tasks, which wait on a wall clock time expressed as an offset from their own cycle time, in addition to any other prerequisites. The usual purpose of these tasks is to retrieve real time data from the external world, triggering at roughly the expected time of availability of the data. Triggering the task at the right time is up to cylc, but the task itself should go into a check-and-wait loop in case the data is delayed; only on successful detection or retrieval should the task report success and then exit (or perhaps report failure and then exit if the data has not arrived by some cutoff time).

14.13 Do Not Treat Real Time Operation As Special

Cylc suites, without modification, can handle real time and delayed operation equally well.

In real time operation clock-triggered tasks constrain the behaviour of the whole suite, or at least of all tasks downstream of them in the dependency graph.

In delayed operation (whether due to an actual delay in an operational suite or because you’re running an historical trial) clock-triggered tasks will not constrain the suite at all, because their trigger times have already passed, and cylc’s cycle interleaving abilities come to the fore. But if a clock-triggered task happens to catch up to the wall clock, it will automatically wait again. In this way a cylc suite naturally and seamlessly transitions between delayed and real time operation as required.

A Suite.rc Reference

 A.1 Top Level Items
 A.2 [cylc]
 A.3 [scheduling]
 A.4 [runtime]
 A.5 [visualization]
 A.6 Special Placeholder Variables In Suite Definitions
 A.7 Default Suite Configuration

This appendix documents the legal content of raw cylc suite.rc files. Many items have sensible default values, and most suites need only configure a few of them explicitly.

In addition to the configuration items described below, Jinja2 expressions can also be embedded to programmatically generate the final suite definition seen by cylc. Use of Jinja2 is documented in Section 8.6.

See also Suite Definition - Suite.rc Overview (Section 8.2) for a descriptive overview of suite.rc files.

A.1 Top Level Items

The only top level configuration items at present are the suite title and description.

A.1.1 title

A single line description of the suite. It is displayed in the db viewer window and can be retrieved at run time with the cylc show command.

A.1.2 description

A multi-line description of the suite. It can be retrieved by the db viewer right-click menu, or at run time with the cylc show command.

A.2 [cylc]

This section is for suite configuration that is not specifically task-related.

A.2.1 [cylc] required run mode

If this item is set cylc will abort if the suite is not started in the specified mode. This can be used for demo suites that have to be run in simulation mode, for example, because they have been taken out of their normal operational context; or to prevent accidental submission of expensive real tasks during suite development.

A.2.2 [cylc] UTC mode

Cylc runs off the suite host’s system clock by default. This item allows you to run the suite in UTC even if the system clock is set to local time. Clock-triggered tasks will trigger when the current UTC time is equal to their cycle time plus offset; other time values used, reported, or logged by cylc will also be in UTC.

A.2.3 [cylc] abort if any task fails

Cylc does not normally abort if tasks fail, but if this item is turned on it will abort with exit status 1 if any task fails.

A.2.4 [cylc] log resolved dependencies

If this is turned on cylc will write the resolved dependencies of each task to the suite log as it becomes ready to run (a list of the IDs of the tasks that actually satisfied its prerequisites at run time). Mainly used for cylc testing and development.

A.2.5 [cylc] job submission

Tasks ready to submit are now queued for processing in a background worker thread, so submitting a lot of tasks at once does not hold cylc back. In the job submission thread tasks are batched, with members of each batch being submitted in parallel. Batches are processed serially, with a delay between batches, to avoid swamping the host system with too many simultaneous job submissions.

The time required for a single task’s job submission to complete typically depends on whether it is a remote task (for which an ssh connection must be established and used) and whether dynamic host selection is used (see A.4.1.16.1; a dynamic host selection command runs as part of the job submission command). The time taken for a batch of parallel job submissions to complete will be roughly the duration of the slowest member process.

[cylc] [job submission] batch size The maximum number of tasks to be submitted in a single batch, in the job submission thread. Cylc waits for all batch member job-submissions to complete before proceeding to the next batch.

[cylc] [job submission] delay between batches It may cause a problem for some batch queue schedulers to submit too many jobs at once, so cylc allows a configurable delay between job submission batches.
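For example (values illustrative; check the units for these items against your cylc version):

```ini
[cylc]
    [[job submission]]
        # submit at most 10 job scripts in parallel per batch
        batch size = 10
        # pause between batches to avoid swamping the host system
        delay between batches = 15
```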

A.2.6 [cylc] [event handler execution]

Task event handlers are queued to a background worker thread that limits the number of event handlers that can run at once, as for task job submissions (above), because of the potential for large suites to swamp the suite host with event handlers. Event handlers are currently executed directly as sub-processes (i.e. unlike task job submission you can’t specify the “job submission method” to use).

Note that suite event handlers are currently executed directly in the main scheduler thread, not queued to the task event handler worker thread.

[cylc] [event handler execution] batch size The maximum number of event handlers to be executed in parallel in the worker thread. Cylc waits for all batch members to complete before proceeding on to the next batch.

[cylc] [event handler execution] delay between batches A configurable delay between batches of task event handlers.

A.2.7 [cylc] [[event hooks]]

Cylc has internal “hooks” to which you can attach handlers that are called whenever certain events occur. This section is for configuring suite event hooks. See Section A.4.1.17 for task event hooks.

Event handlers can send an email or an SMS, call a pager, and so on; or intervene in the operation of their own suite using cylc commands. The command cylc [hook] email-suite is a ready-made suite event handler.

Custom suite event handlers can be located in the suite bin directory, in which case you will not need to modify your $PATH to ensure they are found. They are called by cylc with the following arguments:

<handler> EVENT SUITE MESSAGE

EVENT is the event name (see below), SUITE is the suite name, and MESSAGE, if provided, describes what has happened.
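A minimal sketch of a custom handler, written as a shell function so the call signature is visible (the notification logic is hypothetical):

```shell
#!/bin/bash
# cylc invokes a suite event handler as:  <handler> EVENT SUITE MESSAGE
handle_suite_event() {
    local event=$1 suite=$2 message=$3
    # a real handler might send email or page an operator here
    echo "suite ${suite}: ${event}: ${message}"
}

handle_suite_event shutdown test.suite "suite finished"
```

Placed in the suite bin directory, a script like this needs no $PATH modification to be found.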

Note that event handlers are called by cylc itself so if you wish to pass additional information to them via the environment you must use [cylc] [[environment]], not task runtime environments (suite-level variables - $CYLC_SUITE_INITIAL_CYCLE_TIME etc. - are exported into the cylc environment, however).

[cylc] [[event hooks]] EVENT handler The handler to call when the suite event EVENT occurs. Repeat this item for each suite event that you wish to handle.

[cylc] [[event hooks]] timeout If a timeout is set and the timeout event is handled, the event handler will be called if the suite times out before it finishes. The timer is set initially at suite start up.

[cylc] [[event hooks]] reset timer If True (the default) the suite timer is reset whenever a task changes state, so the timeout measures the interval since the last activity occurred rather than absolute suite execution time.

[cylc] [[event hooks]] abort on timeout If a suite timer is set (above) this will cause the suite to abort with error status if the suite times out while still running.

[cylc] [[event hooks]] abort if EVENT handler fails Cylc does not normally care whether an event handler succeeds or fails, but if this is turned on the EVENT handler will be executed in the foreground (which will block the suite while it is running) and the suite will abort if the handler fails.
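The timeout-related items might be combined like this (handler name hypothetical; the timeout value is assumed to be in minutes):

```ini
[cylc]
    [[event hooks]]
        timeout handler = notify-oncall.sh  # hypothetical script in suite bin/
        timeout = 720
        reset timer = True        # time out on inactivity, not total run time
        abort on timeout = False  # call the handler but keep the suite running
```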

A.2.8 [cylc] [[lockserver]]

The cylc lockserver brokers suite and task locks on the network (these are somewhat analogous to traditional local lock files). It prevents multiple instances of a suite or task from being invoked at the same time (via scheduler instances or cylc submit).

See cylc lockserver --help for how to run the lockserver, and cylc lockclient --help for occasional manual lock management requirements.

[cylc] [[lockserver]] enable The lockserver is currently disabled by default. It is intended mainly for operational use.

[cylc] [[lockserver]] simultaneous instances By default the lockserver prevents multiple simultaneous instances of a suite from running even under different registered names. But allowing this may be desirable if the I/O paths of every task in the suite are dynamically configured to be suite specific (and similarly for the suite state dump and logging directories, by using suite identity variables in their directory paths). Note that the lockserver cannot protect you from running multiple distinct copies of a suite simultaneously.

A.2.9 [cylc] [[environment]]

Variables defined here are exported into the environment in which cylc itself runs. They are then available to local processes spawned directly by cylc. Any variables read by task event handlers must be defined here, for instance, because event handlers are executed directly by cylc, not by running tasks. And similarly the command lines issued by cylc to invoke event handlers or to submit task job scripts could, in principle, make use of environment variables defined here.


[cylc] [[environment]] __VARIABLE__ Replace __VARIABLE__ with any number of environment variable assignment expressions. Values may refer to other local environment variables (order of definition is preserved) and are not evaluated or manipulated by cylc, so any variable assignment expression that is legal in the shell in which cylc is running can be used; note, however, that variable expansions in values are passed through verbatim for later evaluation by the shell. White space around the ‘=’ is allowed (as far as cylc’s suite.rc parser is concerned these are normal configuration items).
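For example (variable names hypothetical):

```ini
[cylc]
    [[environment]]
        # visible to event handlers and other processes spawned by cylc
        ONCALL_EMAIL = oncall@example.com
        DATA_ROOT = $HOME/data   # expansion left to the shell, not cylc
```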

A.2.10 [cylc] [[accelerated clock]]

Accelerated clock settings, used to speed up the wait between cycles in the simulation and dummy run modes.

[cylc] [[accelerated clock]] disable Disabling the accelerated clock makes the suite (and its log time stamps etc.) run on real time. Note that if the suite has clock-triggered tasks that catch up to the wall clock, the interval between cycles will also be in real time - e.g. six hours for a six hourly cycle.

[cylc] [[accelerated clock]] rate The rate at which the accelerated clock runs in real seconds per simulated hour.

[cylc] [[accelerated clock]] offset The clock offset determines the initial time on the accelerated clock, at suite startup, relative to the initial cycle time. An offset of 0 simulates real time operation; greater offsets simulate catch up from a delay and subsequent transition to real time operation.
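For example, to simulate catching up from a one-day delay (rate value illustrative):

```ini
[cylc]
    [[accelerated clock]]
        disable = False
        rate = 10    # 10 real seconds per simulated hour
        offset = 24  # clock starts 24 hours past the initial cycle time
```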

A.2.11 [cylc] [[reference test]]

Reference tests are finite-duration suite runs that abort with non-zero exit status if cylc fails, if any task fails, if the suite times out, or if a shutdown event handler that (by default) compares the test run with a reference run reports failure. See Automated Reference Test Suites, Section 11.14.

[cylc] [[reference test]] suite shutdown event handler A shutdown event handler that should compare the test run with the reference run, exiting with zero exit status only if the test run verifies.

As for any event handler, the full path can be omitted if the script is located somewhere in $PATH or in the suite bin directory.

[cylc] [[reference test]] required run mode If your reference test is only valid for a particular run mode, this setting will cause cylc to abort if a reference test is attempted in another run mode.

[cylc] [[reference test]] allow task failures A reference test run will abort immediately if any task fails, unless this item is set, or a list of expected task failures is provided (below).

[cylc] [[reference test]] expected task failures A reference test run will abort immediately if any task fails, unless allow task failures is set (above) or the failed task appears in this list of IDs of tasks that are expected to fail.

[cylc] [[reference test]] live mode suite timeout The timeout value in minutes after which the test run should be aborted if it has not finished, in live mode. Test runs cannot be done in live mode unless you define a value for this item, because it is not possible to arrive at a sensible default for all suites.

[cylc] [[reference test]] simulation mode suite timeout The timeout value in minutes after which the test run should be aborted if it has not finished, in simulation mode. Test runs cannot be done in simulation mode unless you define a value for this item, because it is not possible to arrive at a sensible default for all suites.

[cylc] [[reference test]] dummy mode suite timeout The timeout value in minutes after which the test run should be aborted if it has not finished, in dummy mode. Test runs cannot be done in dummy mode unless you define a value for this item, because it is not possible to arrive at a sensible default for all suites.

A.3 [scheduling]

This section allows cylc to determine when tasks are ready to run.

A.3.1 [scheduling] initial cycle time

At startup each cycling task (unless specifically excluded under [special tasks]) will be inserted into the suite with this cycle time, or with the closest subsequent valid cycle time for the task. Note that whether or not cold-start tasks, specified under [special tasks], are inserted, and in what state they are inserted, depends on the start up method - cold, warm, or raw. If this item is provided you can override it on the command line or in the gcylc suite start panel.

A.3.2 [scheduling] final cycle time

Cycling tasks are held once they pass the final cycle time, if one is specified. Once all tasks have achieved this state the suite will shut down. If this item is provided you can override it on the command line or in the gcylc suite start panel.

A.3.3 [scheduling] runahead limit

The suite runahead limit prevents the fastest tasks in a suite from getting too far ahead of the slowest ones, as documented in Section 11.6.1. Tasks exceeding the limit are put into a special runahead held state until slower tasks have caught up sufficiently.

A.3.4 [scheduling] [[queues]]

Configuration of internal queues, by which the number of simultaneously active tasks (submitted or running) can be limited, per queue. By default a single queue called default is defined, with all tasks assigned to it and no limit. To use a single queue for the whole suite just set the limit on the default queue as required. See also Section 11.6.2.

[scheduling] [[queues]] [[[__QUEUE__]]] Section heading for configuration of a single queue. Replace __QUEUE__ with a queue name, and repeat the section as required.

[scheduling] [[queues]] [[[__QUEUE__]]] limit The maximum number of active tasks allowed at any one time, for this queue.

[scheduling] [[queues]] [[[__QUEUE__]]] members A list of member tasks, or task family names, to assign to this queue (assigned tasks will automatically be removed from the default queue).
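For example, to cap the whole suite and additionally throttle a group of archiving tasks (names hypothetical):

```ini
[scheduling]
    [[queues]]
        [[[default]]]
            # at most 8 submitted-or-running tasks suite-wide
            limit = 8
        [[[archive]]]
            limit = 2
            # members leave the default queue automatically
            members = archive_obs, archive_fields
```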

A.3.5 [scheduling] [[special tasks]]

This section identifies any tasks with special behaviour. By default (i.e. non “special” behaviour) tasks submit (or queue) as soon as their prerequisites are satisfied, and they spawn a successor at the next valid cycle time for the task as soon as they enter the submitted state.

[scheduling] [[special tasks]] clock-triggered Clock-triggered tasks wait on a wall clock time specified as an offset in hours relative to their own cycle time, in addition to any dependence they have on other tasks. Generally speaking, only tasks that wait on external real time data need to be clock-triggered. Note that in computing the trigger time the full wall clock time and cycle time are compared, not just hours and minutes of the day, so when running a suite in catchup/delayed operation, or over historical periods, clock-triggered tasks will not constrain the suite at all until they catch up to the wall clock.

[scheduling] [[special tasks]] start-up Start-up tasks are one-off tasks (they do not spawn a successor) that only run in the first cycle (and only in a cold-start) and any dependence on them is ignored in subsequent cycles. They can be used to prepare a suite workspace, for example, before other tasks run. Start-up tasks cannot appear in conditional trigger expressions with normal cycling tasks, because the meaning of the conditional expression becomes undefined in subsequent cycles.

[scheduling] [[special tasks]] cold-start A cold-start task is a one-off task used to satisfy the dependence of an associated task with the same cycle time on outputs from a previous cycle, when those outputs are not available. The primary use for this is to cold-start a warm-cycled forecast model that normally depends on restart files (e.g. model background fields) generated by its previous forecast, when there is no previous forecast. This is required when cold-starting the suite, but cold-start tasks can also be inserted into a running suite to restart a model that has had to skip some cycles after running into problems. Cold-start tasks can invoke real cold-start processes, or they can just be dummy tasks that represent some external process that has to be completed before the suite is started. Unlike start-up tasks, dependence on cold-start tasks is preserved in subsequent cycles so they must typically be used in OR’d conditional expressions to avoid holding up the suite.

[scheduling] [[special tasks]] sequential By default, a task spawns a successor as soon as it is submitted to run so that successive instances of the same task can run in parallel if the opportunity arises (i.e. if their prerequisites happen to be satisfied before their predecessor has finished). Sequential tasks, however, will not spawn a successor until they have finished successfully. This should be used for (a) tasks that cannot run in parallel with their own previous instances because they would somehow interfere with each other (use cycle time in all I/O paths to avoid this); and (b) warm cycled forecast models that write out restart files for multiple cycles ahead (exception: see “explicit restart outputs” below).

[scheduling] [[special tasks]] one-off Synchronous one-off tasks have an associated cycle time but do not spawn a successor. Synchronous start-up and cold-start tasks are automatically one-off tasks and do not need to be listed here. Dependence on one-off tasks is not restricted to the first cycle.

[scheduling] [[special tasks]] explicit restart outputs This is only required in the event that you need a warm cycled forecast model to start at the instant its restart files are ready (if other prerequisites are satisfied) even if its previous instance has not finished yet. If so, the model task has to depend on special output messages emitted by the previous instance as soon as its restart files are ready, instead of just on the previous instance finishing. Tasks in this category must define special restart output messages containing the word “restart”, in [runtime] [[TASK]] [[[outputs]]] - see Section 9.4.2.

[scheduling] [[special tasks]] exclude at start-up Any task listed here will be excluded from the initial task pool (this goes for suite restarts too). If an inclusion list is also specified, the initial pool will contain only included tasks that have not been excluded. Excluded tasks can still be inserted at run time. Other tasks may still depend on excluded tasks if they have not been removed from the suite dependency graph, in which case some manual triggering, or insertion of excluded tasks, may be required.

[scheduling] [[special tasks]] include at start-up If this list is not empty, any task not listed in it will be excluded from the initial task pool (this goes for suite restarts too). If an exclusion list is also specified, the initial pool will contain only included tasks that have not been excluded. Excluded tasks can still be inserted at run time. Other tasks may still depend on excluded tasks if they have not been removed from the suite dependency graph, in which case some manual triggering, or insertion of excluded tasks, may be required.

A.3.6 [scheduling] [[dependencies]]

The suite dependency graph is defined under this section. You can plot the dependency graph as you work on it, with cylc graph or by right clicking on the suite in the db viewer. See also Section 8.3.

[scheduling] [[dependencies]] graph The dependency graph for any one-off asynchronous (non-cycling) tasks in the suite goes here. This can be used to construct a suite of one-off tasks (e.g. build jobs and related processing) that just completes and then exits, or an initial suite section that completes prior to the cycling tasks starting (if you make the first cycling tasks depend on the last one-off ones). But note that synchronous start-up tasks can also be used for the latter purpose. See Section A.3.6.2.1 below for graph string syntax, and Section 8.3.

[scheduling] [[dependencies]] [[[__VALIDITY__]]] __VALIDITY__ section headings define the sequence of cycle times for which the subsequent graph section is valid. For cycling tasks use a comma-separated list of integer hours (0 <= H <= 23) for the original hours-of-the-day cycling, or reference a particular stepped daily, monthly, or yearly cycling module.

For repeating asynchronous tasks put ‘ASYNCID:pattern’ in the section heading, where pattern is a regular expression that matches an asynchronous task ID.

See Section 8.3.3, Graph Types for the meaning of the stepped cycler arguments, how multiple graph sections combine within a single suite, and so on.

[scheduling] [[dependencies]] [[[__VALIDITY__]]] graph The dependency graph for the specified validity section (described just above) goes here. Syntax examples follow; see also Sections 8.3 (Configuring Scheduling) and 8.3.4 (Trigger Types).
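A small illustrative example, including an inter-cycle trigger (task names hypothetical):

```ini
[scheduling]
    [[dependencies]]
        [[[0,12]]]
            graph = """
obs => model => products
model[T-12] => model   # model also waits on its previous (12 h earlier) run
"""
```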

[scheduling] [[dependencies]] [[[__VALIDITY__]]] daemon For [[[ASYNCID:pattern]]] validity sections only, list asynchronous daemon tasks by name. This item is located here rather than under [scheduling] [[special tasks]] because a daemon task is associated with a particular asynchronous ID.

A.4 [runtime]

This section is used to specify how, where, and what to execute when tasks are ready to run. Common configuration can be factored out in a multiple-inheritance hierarchy of runtime namespaces that culminates in the tasks of the suite. Order of precedence is determined by the C3 linearization algorithm as used to find the method resolution order in Python language class hierarchies. For details and examples see Section 8.4, Runtime Properties.

A.4.1 [runtime] [[__NAME__]]

Replace __NAME__ with a namespace name, or a comma separated list of names, and repeat as needed to define all tasks in the suite. Names may contain letters, digits, underscores, and hyphens. A namespace represents a group or family of tasks if other namespaces inherit from it, or a task if no others inherit from it.

If multiple names are listed the subsequent settings apply to each.

All namespaces inherit initially from root, which can be explicitly configured to provide or override default settings for all tasks in the suite.

[runtime] [[__NAME__]] inherit A list of the immediate parent(s) this namespace inherits from. If no parents are listed root is assumed.
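A sketch of a small inheritance hierarchy (names and scripts hypothetical):

```ini
[runtime]
    [[root]]
        # defaults inherited by every task in the suite
        [[[environment]]]
            DATA_ROOT = $HOME/data
    [[MODELS]]
        # a family providing common model settings
        pre-command scripting = ". $CYLC_SUITE_DEF_PATH/bin/model-env.sh"
    [[atmos, ocean]]
        inherit = MODELS
        command scripting = run-model.sh
```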

[runtime] [[__NAME__]] title A single line description of this namespace. It is displayed by the cylc list command and can be retrieved from running tasks with the cylc show command.

[runtime] [[__NAME__]] description A multi-line description of this namespace, retrievable from running tasks with the cylc show command.

[runtime] [[__NAME__]] initial scripting Initial scripting is executed at the top of the task job script just before the cylc task started message call is made, and before the task execution environment is configured - so it does not have access to any suite or task environment variables. The original intention was to allow remote tasks to source login scripts before calling the first cylc command, e.g. to set $PYTHONPATH if Pyro has been installed locally. Note however that the remote task invocation mechanism now automatically sources both /etc/profile and $HOME/.profile if they exist. For other uses pre-command scripting should be used if possible because it has access to the task execution environment.

[runtime] [[__NAME__]] environment scripting Environment scripting is inserted into the task job script between the cylc-defined environment (suite and task identity, etc.) and the user-defined task runtime environment - i.e. it has access to the cylc environment, and the task environment has access to the results of this scripting.

[runtime] [[__NAME__]] command scripting The scripting to execute when the associated task is ready to run - this can be a single command or multiple lines of scripting.

[runtime] [[__NAME__]] retry delays A list of time intervals in minutes, after which to resubmit the task if it reports failure. The variable $CYLC_TASK_TRY_NUMBER in the task execution environment is incremented each time, starting from 1 for the original try; this can be used to vary task behavior according to the try number.
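For example (script name hypothetical):

```ini
[runtime]
    [[fetch_data]]
        # resubmit after 5, then 15, then 30 minutes before giving up
        retry delays = 5, 15, 30
        command scripting = """
echo "attempt number $CYLC_TASK_TRY_NUMBER"
fetch-data.sh
"""
```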

[runtime] [[__NAME__]] pre-command scripting Scripting to be executed immediately before the command scripting. This would typically be used to add scripting to every task in a family (for individual tasks you could just incorporate the extra commands into the main command scripting). See also post-command scripting, below.

[runtime] [[__NAME__]] post-command scripting Scripting to be executed immediately after the command scripting. This would typically be used to add scripting to every task in a family (for individual tasks you could just incorporate the extra commands into the main command scripting). See also pre-command scripting, above.

[runtime] [[__NAME__]] manual completion If a task’s initiating process detaches and exits before task processing is finished then cylc cannot arrange for the task to automatically signal when it has succeeded or failed. In such cases you must use this configuration item to tell cylc not to arrange for automatic completion messaging, and insert some minimal completion messaging yourself in appropriate places in the task implementation (see Section 9.4.4).

[runtime] [[__NAME__]] enable resurrection If a message is received from a failed task cylc will normally treat this as an error condition, issue a warning, and leave the task in the “failed” state. But if “enable resurrection” is switched on failed tasks can come back from the dead: if the same task job script is executed again cylc will put the task back into the running state and continue as normal when the started message is received. This can be used to handle HPC-style job preemption wherein a resource manager may kill a running task and reschedule it to run again later, to make way for a job with higher immediate priority. See also Section 11.10, Handling Job Preemption

[runtime] [[__NAME__]] log directory This is where task job scripts, and the stdout and stderr logs for local tasks, are written. The directory path may contain environment variables, including suite identity variables to make the path suite-specific (as the default value does) but it may not contain task identity variables such as $CYLC_TASK_NAME and $CYLC_TASK_CYCLE_TIME or any variables defined in the task environment section - the directory has to be created by cylc before the task runs so the directory creation process does not see the task execution environment. The job script filename is constructed, just before job submission, from the task ID and seconds since epoch, and then .out and .err are appended to construct the stdout and stderr log names, respectively. These filenames are thus unique even if a task gets retriggered and yet will be correctly time ordered if the log directory is listed. The filenames are also recorded by the task proxies for access via cylc commands and the suite control GUIs.

[runtime] [[__NAME__]] work directory Task command scripting is executed from within a work directory created on the fly, if necessary, by the task’s job script. In non-detaching tasks the work directory is automatically removed again if it is empty before the job script exits. The work directory can be accessed by tasks via the environment variable $CYLC_TASK_WORK_PATH.

[runtime] [[__NAME__]] share directory Like task work directories (above) this directory is created on the fly, if necessary, by the job script. It is intended as a shared data area for multiple tasks on the same host, but as for any task runtime config item it can be specialized to particular tasks or groups of tasks. It can be accessed by tasks at run time via the environment variable $CYLC_SUITE_SHARE_PATH.

[runtime] [[__NAME__]] [[[dummy mode]]] Dummy mode configuration.

[runtime] [[__NAME__]] [[[dummy mode]]] command scripting The scripting to execute when the associated task is ready to run, in dummy mode - this can be a single command or multiple lines of scripting.

[runtime] [[__NAME__]] [[[dummy mode]]] disable pre-command scripting This disables pre-command scripting, which is likely to contain code specific to the real task, in dummy mode.

[runtime] [[__NAME__]] [[[dummy mode]]] disable post-command scripting This disables post-command scripting, which is likely to contain code specific to the real task, in dummy mode.

[runtime] [[__NAME__]] [[[simulation mode]]] Simulation mode configuration.

[runtime] [[__NAME__]] [[[simulation mode]]] run time range This defines an interval [min,max) (seconds) from within which the simulation mode task run length will be randomly chosen.

[runtime] [[__NAME__]] [[[job submission]]] This section configures the means by which cylc submits task job scripts to run.

[runtime] [[__NAME__]] [[[job submission]]] method See Task Job Submission (Section 10) for how job submission works, and how to define new methods. Cylc has a number of built-in job submission methods.

[runtime] [[__NAME__]] [[[job submission]]] command template This allows you to override the actual command used by the chosen job submission method. The template’s first %s will be substituted by the job file path. Where applicable the second and third %s will be substituted by the paths to the job stdout and stderr files.
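For example, to adjust the Loadleveler submit command (a sketch; the default template may differ between cylc versions):

```ini
[runtime]
    [[big_model]]
        [[[job submission]]]
            method = loadleveler
            # first %s is replaced by the job file path
            command template = llsubmit %s
```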

[runtime] [[__NAME__]] [[[job submission]]] shell This is the shell used to interpret the job script submitted by cylc when a task is ready to run. It has no bearing on the shell used in task implementations. Command scripting and suite environment variable assignment expressions must be valid for this shell. The latter is currently hardwired into cylc as export item=value - valid for both bash and ksh because value is entirely user-defined - but cylc would have to be modified slightly to allow use of the C shell.

[runtime] [[__NAME__]] [[[remote]]] Configure the host and username for tasks that do not run on the suite host. Cylc will use passwordless ssh to submit the task by the configured job submission method. Cylc must be installed on remote task hosts, but of the external software dependencies only Pyro is required there (in fact not even that, if ssh messaging is used; see below). Passwordless ssh must be configured between the local suite owner on the suite host and the task owner on the remote task host.

[runtime] [[__NAME__]] [[[remote]]] host The remote host for this namespace. This can be a static hostname or a command that prints a suitable hostname to stdout. Host selection commands are executed just prior to job submission. The host (static or dynamic) may have an entry in the cylc site or user config file to specify parameters such as the location of cylc on the remote machine; if not, the corresponding local settings (on the suite host) will be assumed to apply on the remote host.

[runtime] [[__NAME__]] [[[remote]]] owner The task owner username. This is (only) used in the passwordless ssh command line invoked by cylc to submit the remote task (consequently it may be defined using local environment variables, i.e. those of the shell in which cylc runs, and [cylc] [[environment]]).

[runtime] [[__NAME__]] [[[remote]]] suite definition directory The path to the suite definition directory on the remote host, needed if remote tasks require access to files stored there (via $CYLC_SUITE_DEF_PATH) or in the suite bin directory (via $PATH). If this item is not defined, the local suite definition directory path will be assumed, with the suite owner’s home directory, if present, replaced by '$HOME' for interpretation on the remote host.

[runtime] [[__NAME__]] [[[event hooks]]] See Section A.2.7 (Suite Event Hooks) for a general description of cylc event handling. This section is specific to task events. The command cylc [hook] email-task is a ready-made task event handler.

Custom task event handlers can be located in the suite bin directory, in which case you will not need to modify your $PATH to ensure they are found. They are called by cylc with the following arguments:

<handler> EVENT SUITE TASK MESSAGE

EVENT is the event name (see below), SUITE is the suite name, TASK is the task ID, and MESSAGE, if provided, describes what has happened. Note that spaces in event names will be replaced by underscores in the handler argument list to make parsing easier in the script, e.g. “submission failed” becomes “submission_failed”.
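
As an illustration of the argument convention above, here is a minimal handler sketched as a shell function (the function name and message formats are hypothetical; a real handler would be an executable script, e.g. in the suite bin directory):

```shell
#!/bin/bash
# Cylc calls task event handlers as: <handler> EVENT SUITE TASK [MESSAGE]
handle_task_event() {
    local event=$1 suite=$2 task=$3 message=${4:-}
    case $event in
        failed|submission_failed)
            # hypothetical alert format; a real handler might send email here
            echo "ALERT: suite $suite, task $task: $event${message:+ - $message}" ;;
        *)
            echo "INFO: suite $suite, task $task: $event" ;;
    esac
}

handle_task_event failed nwp.oper model.2013030512 "job killed"
# -> ALERT: suite nwp.oper, task model.2013030512: failed - job killed
```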

[runtime] [[__NAME__]] [[[event hooks]]] EVENT handler The handler to call when the task event EVENT occurs. Repeat this item for each task event that you wish to handle.

MESSAGE, if provided, describes what has happened, and TASK identifies the task (NAME.CYCLE for cycling tasks).

To handle timeouts you must also specify a timeout value, below.

[runtime] [[__NAME__]] [[[event hooks]]] submission timeout If a task has not started the specified number of minutes after it was submitted, the event handler will be called by cylc with submission_timeout as the EVENT argument.

[runtime] [[__NAME__]] [[[event hooks]]] execution timeout If a task has not finished the specified number of minutes after it started running, the event handler will be called by cylc with execution_timeout as the EVENT argument.

[runtime] [[__NAME__]] [[[event hooks]]] reset timer If you set an execution timeout, the timer can be reset to zero every time a message is received from the running task (which indicates that the task is still alive). Otherwise the task will time out if it does not finish in the allotted time, regardless of incoming messages.

[runtime] [[__NAME__]] [[[environment]]] The user defined task execution environment. Variables defined here can refer to cylc suite and task identity variables, which are exported earlier in the task job script, and variable assignment expressions can use cylc utility commands because access to cylc is also configured earlier in the script. See also Task Execution Environment, Section 8.4.7.

[runtime] [[__NAME__]] [[[environment]]] __VARIABLE__ Replace __VARIABLE__ with any number of environment variable assignment expressions. Order of definition is preserved so values can refer to previously defined variables. Values are passed through to the task job script without evaluation or manipulation by cylc, so any variable assignment expression that is legal in the job submission shell can be used. White space around the ‘=’ is allowed (as far as cylc’s suite.rc parser is concerned these are just normal configuration items).
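
A sketch of a task environment section (the variable names and values are hypothetical; $CYLC_SUITE_SHARE_PATH and $CYLC_TASK_CYCLE_TIME are cylc identity variables, exported earlier in the job script):

```
[runtime]
    [[model]]
        [[[environment]]]
            # order of definition is preserved, so later values
            # can refer to earlier ones:
            DATA_DIR = $CYLC_SUITE_SHARE_PATH/$CYLC_TASK_CYCLE_TIME
            RESTART  = $DATA_DIR/restart.nc
            # white space around '=' is allowed:
            RUN_LEN_HOURS = 48
```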

[runtime] [[__NAME__]] [[[directives]]] Batch queue scheduler directives. Whether or not these are used depends on the job submission method. For the built-in loadleveler, pbs, and sge methods directives are written to the top of the task job script in the correct format for the method. Specifying directives individually like this allows use of default directives that can be individually overridden at lower levels of the runtime namespace hierarchy.

[runtime] [[__NAME__]] [[[directives]]] __DIRECTIVE__ Replace __DIRECTIVE__ with each directive assignment, e.g. class = parallel

Example directives for the built-in job submission methods are shown in Section 10.2.

[runtime] [[__NAME__]] [[[outputs]]] This section is only required if other tasks need to trigger off specific internal outputs of this task (as opposed to triggering off it finishing). The task implementation must report the specified output messages by calling cylc task message when the corresponding real outputs have been completed.

[runtime] [[__NAME__]] [[[outputs]]] __OUTPUT__ Replace __OUTPUT__ with any number of labelled output messages.
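
A sketch of a task with a labelled internal output that a downstream task triggers off (the task names and message text are hypothetical):

```
[scheduling]
    [[dependencies]]
        [[[0,6,12,18]]]
            # post triggers off the labelled output, not off model finishing:
            graph = "model:fields => post"
[runtime]
    [[model]]
        [[[outputs]]]
            fields = "surface fields ready for $CYLC_TASK_CYCLE_TIME"
```

The model task implementation would report this output, at the moment the corresponding files are actually complete, with cylc task message "surface fields ready for $CYLC_TASK_CYCLE_TIME".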

A.5 [visualization]

Configuration of suite graphing and, where explicitly stated, the graph-based suite control GUI.

A.5.1 [visualization] initial cycle time

The first cycle time to use when plotting the suite dependency graph.

A.5.2 [visualization] final cycle time

The last cycle time to use when plotting the suite dependency graph. Typically this should be just far enough ahead of the initial cycle to show the full suite.

A.5.3 [visualization] collapsed families

A list of family (namespace) names to be shown in the collapsed state (i.e. the family members will be replaced by a single family node) when the suite is plotted in the graph viewer or the gcylc graph view. This item determines how family groups are shown initially in the suite control GUI; subsequently you can use the interactive controls to group and ungroup nodes at will. For the same reason (presence of interactive grouping controls) this item is ignored if the suite is reparsed during graph viewing (other changes to graph styling will be picked up and applied if the graph viewer detects that the suite.rc file has changed).

A.5.4 [visualization] use node color for edges

Graph edges (dependency arrows) can be plotted in the same color as the upstream node (task or family) to make paths through a complex graph easier to follow.

A.5.5 [visualization] use node color for labels

Graph node labels can be printed in the same color as the node outline.

A.5.6 [visualization] default node attributes

Set the default attributes (color and style etc.) of graph nodes (tasks and families). Attribute pairs must be quoted to hide the internal = character.

A.5.7 [visualization] default edge attributes

Set the default attributes (color and style etc.) of graph edges (dependency arrows). Attribute pairs must be quoted to hide the internal = character.
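
For example (graphviz attribute pairs, quoted to hide the internal = character as noted above):

```
[visualization]
    default node attributes = "style=filled", "fillcolor=lightblue", "shape=box"
    default edge attributes = "color=grey"
```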

A.5.8 [visualization] enable live graph movie

If True, the graph-based suite control GUI will write out a dot-language graph file on every change; these can be post-processed into a movie showing how the suite evolves. The frames will be written to the run time graph directory (see below).

A.5.9 [visualization] [[node groups]]

Define named groups of graph nodes (tasks and families) which can be styled en masse, by name, in [visualization] [[node attributes]]. Node groups are automatically defined for all task families, including root, so you can style family and member nodes at once by family name.

[visualization] [[node groups]] __GROUP__ Replace __GROUP__ with each named group of tasks or families.

A.5.10 [visualization] [[node attributes]]

Here you can assign graph node attributes to specific nodes, or to all members of named groups defined in [visualization] [[node groups]] (task families are automatically node groups). Group styling can be overridden for individual nodes or subgroups.

[visualization] [[node attributes]] __NAME__ Replace __NAME__ with each node or node group for style attribute assignment.
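
A combined sketch of both sections (the group and task names are hypothetical):

```
[visualization]
    [[node groups]]
        obs = ObsGet, ObsProc
    [[node attributes]]
        # style all members of the named group at once:
        obs = "style=filled", "fillcolor=orange"
        # override the group styling for one member:
        ObsProc = "shape=ellipse"
```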

A.5.11 [visualization] [[runtime graph]]

Cylc can generate graphs of dependencies resolved at run time, i.e. what actually triggers off what as the suite runs. This feature is retained mainly for development and debugging purposes. You can use simulation mode or dummy mode to generate runtime graphs very quickly.

[visualization] [[runtime graph]] enable Runtime graphing is disabled by default.

[visualization] [[runtime graph]] cutoff New nodes will be added to the runtime graph as the corresponding tasks trigger, until their cycle time exceeds the initial cycle time by more than this cutoff, in hours.

[visualization] [[runtime graph]] directory Where to put the runtime graph file, runtime-graph.dot.
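
Putting the three items together (the cutoff value and directory path are illustrative, not defaults):

```
[visualization]
    [[runtime graph]]
        enable = True
        # stop adding nodes 24 hours past the initial cycle time:
        cutoff = 24
        directory = $CYLC_SUITE_DEF_PATH/runtime-graph
```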

A.6 Special Placeholder Variables In Suite Definitions

See Section 8.7.

A.7 Default Suite Configuration

Cylc provides, via $CYLC_DIR/conf/suiterc/*.spec, sensible default values for many configuration items so that most users will not need to explicitly configure log directories and so on. The defaults are sufficient, in fact, to allow test suites defined by dependency graph alone (command scripting, for example, defaults to printing a simple message, sleeping for a few seconds, and then exiting).

The cylc get-config command parses a suite definition and retrieves configuration values for individual items, sections, or entire suites.

B Site/User Config File Reference

Cylc now has a site config file for settings that apply to all suites: $CYLC_DIR/conf/site/site.rc. Prior to cylc-5.0 some of these settings (e.g. log directory locations) had to be put in suite definitions, and some (e.g. preferred editors) were specified in the user's environment. Many of the site settings can be overridden by individual users in $HOME/.cylc/user.rc.

See also cylc get-global-config --help, which can be used to write an initial site or user config file with internal documentation and all items commented out.

As a temporary measure the self-documenting configspec for site/user config files is included verbatim here.

 
#>______________________________________________________________________ 
#> This is a ConfigObj configspec for cylc site and user configuration. 
#> All legal configuration items and default values are defined below. 
#>---------------------------------------------------------------------- 
#> HOW TO CUSTOMIZE SETTINGS FOR YOUR SITE: 
#>   *** Do not modify this configspec file *** 
#>   (1) Run "cylc get-global-config --write-site" to generate the file 
#>       $CYLC_DIR/conf/site.rc with all default settings commented out. 
#>   (2) Uncomment and modify specific configuration items as required. 
#>---------------------------------------------------------------------- 
#> HOW TO CUSTOMIZE SETTINGS FOR AN INDIVIDUAL USER: 
#>   *** Do not modify this configspec file *** 
#>   (1) Run "cylc get-global-config --write-user" to generate the file 
#>       $HOME/.cylc/cylc.rc with all default settings commented out. 
#>   (2) Uncomment and modify specific configuration items as required. 
#>---------------------------------------------------------------------- 
#>   NOTE THAT SITE AND/OR USER CONFIG IS REQUIRED ON TASK HOSTS TOO 
#>---------------------------------------------------------------------- 
#> Comments starting with "#>" are not passed on to generated .rc files. 
#>---------------------------------------------------------------------- 
# Sections or items preceded by "# SITE ONLY" can not be set by users. 
#> (trailing comments would be better markers but they don't get passed 
#> through from the configspec to generated config files) 
#----------------------------------------------------------------------- 
 
# A temporary directory is needed by a few cylc commands. Leave it unset 
# to get the default system temporary directory (usually $TMPDIR). 
# Cylc temporary directories are automatically cleaned up on exit. 
temporary directory = string( default=None ) 
 
# A rolling archive of suite state dumps is maintained for restart use. 
state dump rolling archive length = integer( min=1, default=10 ) 
 
# Task messaging settings apply to the "cylc task COMMAND" commands 
# used by running tasks to communicate with their parent suite. If a 
# message send fails after the configured number of tries the task will 
# carry on regardless. 
[task messaging] 
    retry interval in seconds = float( min=1, default=30 ) 
    maximum number of tries = integer( min=1, default=10 ) 
    # This timeout is the same as --pyro-timeout for user commands. 
    # If set to None (no timeout) a non-responsive suite (e.g. suspended 
    # with Ctrl-Z) could cause a task to hang indefinitely when it 
    # attempts to send a message to the suite. 
    connection timeout in seconds = float( min=1, default=None ) 
 
# suites logs go under the suite run directory (see below) 
[suite logging] 
    roll over at start-up = boolean( default=True ) 
    rolling archive length = integer( min=1, default=5 ) 
    maximum size in bytes = integer( min=1000, default=1000000 ) 
 
# The "cylc doc" command and GUI Help menus need the following items. 
[documentation] 
    # Documentation files that come with the cylc release tarball. 
# SITE ONLY 
    [[files]] 
        html index = string( default="$CYLC_DIR/doc/index.html" ) 
        pdf user guide = string( default="$CYLC_DIR/doc/pdf/cug-pdf.pdf" ) 
        multi-page html user guide = string( default="$CYLC_DIR/doc/html/multi/cug-html.html" ) 
        single-page html user guide = string( default="$CYLC_DIR/doc/html/single/cug-html.html" ) 
    # Documentation URLs: 
    [[urls]] 
        # The cylc homepage links to documentation for the latest release. 
# SITE ONLY 
        internet homepage = string( default="http://cylc.github.com/cylc/" ) 
        # You may want to copy the docs for access via a local web server. 
        local index = string( default=None ) 
 
# PDF and HTML viewers can be launched by cylc to view documentation. 
[document viewers] 
    pdf = string( default="evince" ) 
    html = string( default="firefox" ) 
 
# Configure your favourite text editor for editing suite definitions. 
[editors] 
    # Examples: 
    #  + vim           # vim in-terminal 
    #  + gvim -f       # (-f is required for "cylc edit --inline") 
    #  + xterm -e vim  # in-terminal as a proxy for a GUI editor 
    #  + emacs         # emacs GUI 
    #  + emacs -nw     # emacs in-terminal 
    in-terminal = string( default="vim" ) 
    gui         = string( default="gvim -f" ) 
 
# Pyro is used by cylc for network communications. 
[pyro] 
    # Each suite listens on a dedicated network port. 
    # Servers bind on the first port available from the base port up: 
# SITE ONLY 
    base port = integer( default=7766 ) 
    # This sets the maximum number of suites that can run at once. 
# SITE ONLY 
    maximum number of ports = integer( default=100 ) 
    # Port numbers are recorded in this directory, by suite name. 
    ports directory = string( default="$HOME/.cylc/ports/" ) 
 
# The [task hosts] section configures items needed to run tasks on 
# specific hosts at your site, including 'local' for the suite host. 
# The local sub-section also provides default values for directory paths 
# on remote task hosts - the local home directory path, if present, will 
# be replaced with literal '$HOME' for evaluation on the remote host. 
# A remote host entry can be empty (i.e. just the sub-section heading) 
# or missing (i.e. no entry for a requested host) in which case the 
# local defaults will be used with $HOME replaced. 
[task hosts] 
    # The default task host is the suite host, called 'local' here: 
    [[local]] 
        # Run directory: 
        #   For suite event log, and suite stdout and stderr logs: 
        #       <VALUE>/<suite-name>/log/suite/ 
        #   and suite state dump files: 
        #       <VALUE>/<suite-name>/state/ 
        #   and task job scripts and stdout and stderr logs: 
        #       <VALUE>/<suite-name>/log/job/ 
        #   If not set the local path will be used with your home 
        #   directory path swapped for '$HOME' (to be evaluated on host) 
        run directory = string( default="$HOME/cylc-run" ) 
        # Workspace directory: 
        #   For the suite share directory, a common workspace made 
        #   available to all tasks via $CYLC_SUITE_SHARE_PATH: 
        #       <VALUE>/<suite-name>/share/ 
        #   and task work directories, from within which task job 
        #   scripts are executed: 
        #       <VALUE>/<suite-name>/work/<task-id> 
        #   This can be distinct from the run directory tree because of 
        #   the potential for a much greater storage requirement. 
        #   If not set for remote task hosts: same as for run directory. 
        workspace directory = string( default="$HOME/cylc-run" ) 
 
        # THE FOLLOWING THREE ITEMS ARE NOT USED FOR THE LOCAL HOST 
        # unless you run tasks under other local user accounts, but 
        # they can still provide default settings for remote hosts. 
        # Cylc location on the host, leave unset if cylc is in $PATH: 
        cylc directory = string( default=None ) 
        # Re-invoke task messaging commands on the suite host 
        # instead of using Pyro-based RPC across the network: 
        use ssh messaging = boolean( default=False ) 
        # How to invoke commands on this host; default shown: 
        remote shell template = string( default='ssh -oBatchMode=yes %s' ) 
        # Use a login shell or not for remote command invocation.  By 
        # default Cylc will submit remote ssh commands using a login 
        # shell. For security reasons some institutions do not allow 
        # unattended commands to start login shells, setting this item 
        # to false will disable that behaviour.  When this option is set 
        # to True Cylc will start a Bash login shell to run remote ssh 
        # commands, e.g. ssh user@host 'bash --login cylc ...' which 
        # will source the files /etc/profile and ~/.profile in order to 
        # set up the user environment. Without the login option Cylc 
        # will be run directly by ssh, e.g. ssh user@host 'cylc ...' 
        # which will use the default shell on the remote machine. In 
        # this case the environment will be set up by sourcing the files 
        # ~/.bashrc or ~/.cshrc, depending on the shell type of the 
        # remote machine.  In either case the PATH environment variable 
        # on the remote machine should include $CYLC_DIR/bin in order 
        # for the Cylc executable to be found. 
        use login shell = boolean( default=True ) 
 
    #> Here's the __many__ configspec for available remote task hosts: 
    [[__many__]] 
        run directory = string( default=None ) 
        workspace directory = string( default=None) 
        cylc directory = string( default=None ) 
        use ssh messaging = boolean( default=None ) 
        remote shell template = string( default=None ) 
        use login shell = boolean( default=True ) 
 
# SUITE HOST SELF-IDENTIFICATION: The suite host's identity, by NAME or 
# IP ADDRESS, must be determined locally by cylc and passed to task 
# execution environments as $CYLC_SUITE_HOST so that tasks can send 
# messages back.  If name is used, the host name determined on the suite 
# host must resolve, on the task host, to the external IP address of the 
# suite host. Otherwise the external IP address of the suite host, as 
# seen by the task host, must be determined on the suite host, which is 
# not always easy to do.  Cylc requires a special "target address" to do 
# this; see documentation in $CYLC_DIR/lib/cylc/suite_host.py for why. 
# (TO DO: is it conceivable that different remote task hosts at the same 
# site might see the suite host differently? If so we would need to be 
# able to override the target in suite definitions.) 
[suite host self-identification] 
    # Method: "name", "address", or "hardwired" 
    method = option( "name", "address", "hardwired", default="name" ) 
    # Target: if your suite host sees the internet a common address such 
    # as 'google.com' will do; otherwise choose a host on your intranet. 
    target = string( default="google.com" ) 
    # For the hardwired method, put the host name or IP address here: 
    host = string( default=None )

C Command Reference

 C.1 Command Categories
 C.2 Commands
 
 
Cylc ("silk") is a suite engine and metascheduler that specializes in 
cycling weather and climate forecasting suites and related processing 
(but it can also be used for one-off workflows of non-cycling tasks). 
For detailed documentation see the Cylc User Guide (cylc doc --help). 
 
Version 5.1.0-3-g48e04 
 
Cylc also has a comprehensive Graphical User Interface: 
   "gcylc" (a.k.a. "cylc gui") - work on and run a specific suite. 
   "cylc dbviewer" - to view registered suites; right-click to act. 
 
USAGE: 
  % cylc -v,--version                   # print cylc version 
  % cylc help,--help,-h,?               # print this help page 
 
  % cylc help CATEGORY                  # print help by category 
  % cylc CATEGORY help                  # (ditto) 
 
  % cylc help [CATEGORY] COMMAND        # print command help 
  % cylc [CATEGORY] COMMAND help,--help # (ditto) 
 
  % cylc [CATEGORY] COMMAND [options] SUITE [arguments] 
  % cylc [CATEGORY] COMMAND [options] SUITE TASK [arguments] 
 
Commands and categories can both be abbreviated. Use of categories is 
optional, but they organize help and disambiguate abbreviated commands: 
  % cylc control trigger SUITE TASK     # trigger TASK in SUITE 
  % cylc trigger SUITE TASK             # ditto 
  % cylc con trig SUITE TASK            # ditto 
  % cylc c t SUITE TASK                 # ditto 
 
CYLC SUITE NAMES AND YOUR REGISTRATION DATABASE 
  Suites are addressed by hierarchical names such as suite1, nwp.oper, 
nwp.test.LAM2, etc. in a "registration database" ($HOME/.cylc/DB) that 
simply associates names with the suite definition locations.  The 
'--db=' command option can be used to view and copy suites from other 
users, with access governed by normal filesystem permissions. 
 
TASK IDENTIFICATION IN CYLC SUITES 
  Tasks are identified by NAME.TAG where for cycling tasks TAG is a 
cycle time (YYYY[MM[DD[HH[mm[ss]]]]]) and for asynchronous tasks TAG is 
an integer (just '1' for one-off asynchronous tasks). 
 
HOW TO DRILL DOWN TO COMMAND USAGE HELP: 
  % cylc help           # list all available categories (this page) 
  % cylc help prep      # list commands in category 'preparation' 
  % cylc help prep edit # command usage help for 'cylc [prep] edit' 
 
Command CATEGORIES: 
  all ........... The complete command set. 
  db|database ... Suite registration, copying, deletion, etc. 
  preparation ... Suite editing, validation, visualization, etc. 
  information ... Interrogate suite definitions and running suites. 
  discovery ..... Detect running suites. 
  control ....... Suite start up, monitoring, and control. 
  utility ....... Cycle arithmetic and templating, housekeeping, etc. 
  task .......... The task messaging interface. 
  hook .......... Suite and task event hook scripts. 
  admin ......... Cylc installation, testing, and example suites. 
  license|GPL ... Software licensing information (GPL v3.0).

C.1 Command Categories

C.1.1 admin
 
CATEGORY: admin - Cylc installation, testing, and example suites. 
 
HELP: cylc [admin] COMMAND help,--help 
  You can abbreviate admin and COMMAND. 
  The category admin may be omitted. 
 
COMMANDS: 
  check-examples .... Check all example suites validate 
  import-examples ... Import example suites into your user database 
  test-battery ...... Run a battery of self-diagnosing test suites 
  test-db ........... Run an automated suite database test

C.1.2 all
 
CATEGORY: all - The complete command set. 
 
HELP: cylc [all] COMMAND help,--help 
  You can abbreviate all and COMMAND. 
  The category all may be omitted. 
 
COMMANDS: 
  alias ...................... Register an alternative name for a suite 
  broadcast|bcast ............ Change suite [runtime] settings on the fly 
  cat-log|log ................ Print filtered suite logs 
  cat-state .................. Print the state of tasks from the state dump 
  check-examples ............. Check all example suites validate 
  checkvars .................. Check required environment variables en masse 
  conditions ................. Print the GNU General Public License v3.0 
  copy|cp .................... Copy a suite or a group of suites 
  cycletime .................. Cycle time arithmetic and filename templating 
  dbviewer ................... GUI to view registered suites and operate on them. 
  depend ..................... Add prerequisites to tasks in a running suite 
  diff|compare ............... Compare two suite definitions and print differences 
  documentation|browse ....... Display cylc documentation (User Guide etc.) 
  dump ....................... Print the state of tasks in a running suite 
  edit ....................... Edit suite definitions, optionally inlined 
  failed|task-failed ......... Release task lock and report failure 
  get-config ................. Parse a suite and report configuration values 
  get-directory .............. Retrieve suite definition directory paths 
  get-global-config .......... print or generate site and user config 
  graph ...................... Plot suite dependency graphs and runtime hierarchies 
  gui ........................ (a.k.a. gcylc) cylc GUI for suite control etc. 
  hold ....................... Hold (pause) suites or individual tasks 
  housekeeping ............... Parallel archiving and cleanup on cycle time offsets 
  import-examples ............ Import example suites into your user database 
  insert ..................... Insert tasks into a running suite 
  jobscript .................. Generate a task job script and print it to stdout 
  list|ls .................... Print suite tasks and runtime hierarchies 
  lockclient|lc .............. Manual suite and task lock management 
  lockserver ................. The cylc lockserver daemon 
  message|task-message ....... Report progress and completion of outputs 
  monitor .................... An in-terminal suite monitor (see also gcylc) 
  nudge ...................... Cause the cylc task processing loop to be invoked 
  ping ....................... Check that a suite is running 
  print ...................... Print registered suites 
  purge ...................... Remove task trees from a running suite 
  random|rnd ................. Generate a random integer within a given range 
  refresh .................... Report invalid registrations and update suite titles 
  register ................... Register a suite for use 
  release|unhold ............. Release (unpause) suites or individual tasks 
  reload ..................... Reload the suite definition at run time 
  remove|kill ................ Remove tasks from a running suite 
  reregister|rename .......... Change the name of a suite 
  reset ...................... Manually set tasks to the waiting, ready, or succeeded states 
  restart .................... Restart a suite from a previous state 
  run|start .................. Start a suite at a given cycle time 
  scan ....................... Scan a host for running suites and lockservers 
  scp-transfer ............... Scp-based file transfer for cylc suites 
  search|grep ................ Search in suite definitions 
  set-runahead ............... Change the runahead limit in a running suite. 
  set-verbosity .............. Change a running suite's logging verbosity 
  show ....................... Print task state (prerequisites and outputs etc.) 
  started|task-started ....... Acquire a task lock and report started 
  stop|shutdown .............. Shut down running suites 
  submit|single .............. Run a single task just as its parent suite would 
  succeeded|task-succeeded ... Release task lock and report succeeded 
  suite-state ................ Query the task states in a suite 
  test-battery ............... Run a battery of self-diagnosing test suites 
  test-db .................... Run an automated suite database test 
  trigger .................... Manually trigger or re-trigger a task 
  unregister ................. Unregister and optionally delete suites 
  validate ................... Parse and validate suite definitions 
  view ....................... View suite definitions, inlined and Jinja2 processed 
  warranty ................... Print the GPLv3 disclaimer of warranty

C.1.3 control
 
CATEGORY: control - Suite start up, monitoring, and control. 
 
HELP: cylc [control] COMMAND help,--help 
  You can abbreviate control and COMMAND. 
  The category control may be omitted. 
 
COMMANDS: 
  broadcast|bcast ... Change suite [runtime] settings on the fly 
  depend ............ Add prerequisites to tasks in a running suite 
  gui ............... (a.k.a. gcylc) cylc GUI for suite control etc. 
  hold .............. Hold (pause) suites or individual tasks 
  insert ............ Insert tasks into a running suite 
  nudge ............. Cause the cylc task processing loop to be invoked 
  purge ............. Remove task trees from a running suite 
  release|unhold .... Release (unpause) suites or individual tasks 
  reload ............ Reload the suite definition at run time 
  remove|kill ....... Remove tasks from a running suite 
  reset ............. Manually set tasks to the waiting, ready, or succeeded states 
  restart ........... Restart a suite from a previous state 
  run|start ......... Start a suite at a given cycle time 
  set-runahead ...... Change the runahead limit in a running suite. 
  set-verbosity ..... Change a running suite's logging verbosity 
  stop|shutdown ..... Shut down running suites 
  trigger ........... Manually trigger or re-trigger a task

C.1.4 database
 
CATEGORY: db|database - Suite registration, copying, deletion, etc. 
 
HELP: cylc [db|database] COMMAND help,--help 
  You can abbreviate db|database and COMMAND. 
  The category db|database may be omitted. 
 
COMMANDS: 
  alias ............... Register an alternative name for a suite 
  copy|cp ............. Copy a suite or a group of suites 
  dbviewer ............ GUI to view registered suites and operate on them. 
  get-directory ....... Retrieve suite definition directory paths 
  print ............... Print registered suites 
  refresh ............. Report invalid registrations and update suite titles 
  register ............ Register a suite for use 
  reregister|rename ... Change the name of a suite 
  unregister .......... Unregister and optionally delete suites

C.1.5 discovery
 
CATEGORY: discovery - Detect running suites. 
 
HELP: cylc [discovery] COMMAND help,--help 
  You can abbreviate discovery and COMMAND. 
  The category discovery may be omitted. 
 
COMMANDS: 
  ping ... Check that a suite is running 
  scan ... Scan a host for running suites and lockservers

C.1.6 hook
 
CATEGORY: hook - Suite and task event hook scripts. 
 
HELP: cylc [hook] COMMAND help,--help 
  You can abbreviate hook and COMMAND. 
  The category hook may be omitted. 
 
COMMANDS: 
  check-triggering ... A suite shutdown event hook for cylc testing 
  email-suite ........ A suite event hook script that sends email alerts 
  email-task ......... A task event hook script that sends email alerts

C.1.7 information
 
CATEGORY: information - Interrogate suite definitions and running suites. 
 
HELP: cylc [information] COMMAND help,--help 
  You can abbreviate information and COMMAND. 
  The category information may be omitted. 
 
COMMANDS: 
  cat-log|log ............ Print filtered suite logs 
  cat-state .............. Print the state of tasks from the state dump 
  documentation|browse ... Display cylc documentation (User Guide etc.) 
  dump ................... Print the state of tasks in a running suite 
  get-config ............. Parse a suite and report configuration values 
  get-global-config ...... print or generate site and user config 
  gui|gcylc .............. (a.k.a. gcylc) cylc GUI for suite control etc. 
  list|ls ................ Print suite tasks and runtime hierarchies 
  monitor ................ An in-terminal suite monitor (see also gcylc) 
  show ................... Print task state (prerequisites and outputs etc.)

C.1.8 license
 
CATEGORY: license|GPL - Software licensing information (GPL v3.0). 
 
HELP: cylc [license|GPL] COMMAND help,--help 
  You can abbreviate license|GPL and COMMAND. 
  The category license|GPL may be omitted. 
 
COMMANDS: 
  conditions ... Print the GNU General Public License v3.0 
  warranty ..... Print the GPLv3 disclaimer of warranty

C.1.9 preparation
 
CATEGORY: preparation - Suite editing, validation, visualization, etc. 
 
HELP: cylc [preparation] COMMAND help,--help 
  You can abbreviate preparation and COMMAND. 
  The category preparation may be omitted. 
 
COMMANDS: 
  diff|compare ... Compare two suite definitions and print differences 
  edit ........... Edit suite definitions, optionally inlined 
  graph .......... Plot suite dependency graphs and runtime hierarchies 
  jobscript ...... Generate a task job script and print it to stdout 
  list|ls ........ Print suite tasks and runtime hierarchies 
  search|grep .... Search in suite definitions 
  validate ....... Parse and validate suite definitions 
  view ........... View suite definitions, inlined and Jinja2 processed

C.1.10 task
 
CATEGORY: task - The task messaging interface. 
 
HELP: cylc [task] COMMAND help,--help 
  You can abbreviate task and COMMAND. 
  The category task may be omitted. 
 
COMMANDS: 
  failed|task-failed ......... Release task lock and report failure 
  message|task-message ....... Report progress and completion of outputs 
  started|task-started ....... Acquire a task lock and report started 
  submit|single .............. Run a single task just as its parent suite would 
  succeeded|task-succeeded ... Release task lock and report succeeded

C.1.11 utility
 
CATEGORY: utility - Cycle arithmetic and templating, housekeeping, etc. 
 
HELP: cylc [utility] COMMAND help,--help 
  You can abbreviate utility and COMMAND. 
  The category utility may be omitted. 
 
COMMANDS: 
  checkvars ....... Check required environment variables en masse 
  cycletime ....... Cycle time arithmetic and filename templating 
  housekeeping .... Parallel archiving and cleanup on cycle time offsets 
  lockclient|lc ... Manual suite and task lock management 
  lockserver ...... The cylc lockserver daemon 
  random|rnd ...... Generate a random integer within a given range 
  scp-transfer .... Scp-based file transfer for cylc suites 
  suite-state ..... Query the task states in a suite

C.2 Commands

C.2.1 alias
 
Usage: cylc [db] alias [OPTIONS] REG1 REG2 
 
Register an alias REG2 for suite REG1. Using an alias is equivalent to 
using the full suite name, except for the following caveat: aliases are 
stored in your local suite db and aliased suites run under their full 
name; therefore you can't interact with remote suites via an alias 
unless you use '--use-ssh' (for [control] category commands), which 
re-invokes the control command on the remote suite host (where the alias 
is known). 
 
  $ cylc alias global.ensemble.parallel.test3 bob 
  $ cylc edit bob 
  $ cylc run  bob 
  $ cylc show bob # etc. 
 
Arguments: 
   REG1               Target suite name 
   REG2               An alias for REG1 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line.

C.2.2 broadcast
 
Usage: cylc [control] broadcast|bcast [OPTIONS] REG 
 
This command overrides [runtime] namespace settings in a running suite. 
 
For settings affected by multiple broadcasts with respect to cycle time 
and/or namespace, the precedence is as follows: 
 1) specific cycles take precedence over all-cycle broadcasts; then 
 2) the most specific namespace (farthest from root) takes precedence. 
 
Broadcast settings persist across suite restarts. 
 
Items with internal spaces must be quoted, e.g.: 
  % cylc broadcast -s "[environment]VERSE = the quick brown fox" REG 
 
To view current active broadcasts: 
  % cylc broadcast --display REG 
  % cylc broadcast --display-task=TASKID REG 
 
To unset active broadcast settings: 
  % cylc broadcast -n NAME -u 'command scripting' REG 
  % cylc broadcast --clear REG # clear all broadcast settings 
 
Broadcast settings are applied to tasks just before job submission. 
 
LIMITATIONS: broadcast cannot change the runtime inheritance hierarchy. 
 
See also 'cylc reload' - reload a modified suite definition at run time. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting. 
  -n NAME, --namespace=NAME 
                        Target runtime namespace, default root; can be used 
                        multiple times to target several namespaces at once. 
  -t TAG, --tag=TAG     Cycle time or integer tag: target tasks with just this 
                        tag; can be used multiple times to target several 
                        cycles. 
  -s [SEC]ITEM=VALUE, --set=[SEC]ITEM=VALUE 
                        Set a runtime item by broadcast. Can be used multiple 
                        times to broadcast several settings at once. 
  -u [SEC]ITEM, --unset=[SEC]ITEM 
                        Unset an active broadcast item. Can be used multiple 
                        times to unset several settings at once. 
  -c, --clear           Clear all current broadcast settings. 
  -d, --display         Display current active broadcast settings. 
  -k TASKID, --display-task=TASKID 
                        Print current active broadcast for a particular task 
                        (NAME.TAG). 
  -b, --box             Use unicode box characters with the show options. 
  -r, --raw             With -d|--display, print in raw Python format.

C.2.3 cat-log
 
Usage: cylc [info] cat-log|log [OPTIONS] REG 
Print and filter cylc suite (not task) log files. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -l, --location        Print the suite log file location and exit. 
  -t TASK, --task=TASK  Filter the log for messages from a specific task 
  -f RE, --filter=RE    Filter the log with a Python-style regular expression 
                        e.g. '\[(foo|bar).*(started|succeeded)' 
  -r INT, --rotation=INT 
                        Rotation number (to view older, rotated logs) 
  -o, --stdout          Print the suite stdout log (the default is the suite 
                        event log). 
  -e, --stderr          Print the suite stderr log (the default is the suite 
                        event log).
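The '-f,--filter' option applies a Python-style regular expression to each log line. Its effect resembles the following grep sketch over a made-up log excerpt (the log lines and file path are illustrative, not real cylc output; grep -E syntax is close enough to Python's for this pattern):

```shell
# Write a small made-up suite event log.
cat > /tmp/demo-suite.log <<'EOF'
2013/03/05 09:00:01 INFO - [model.2010082318] -(current:submitted)> started
2013/03/05 09:05:42 INFO - [model.2010082318] -(current:running)> succeeded
2013/03/05 09:05:43 INFO - [post.2010082318] -(current:queued)> submitted
EOF
# Keep only lines where a task reached the started or succeeded state,
# in the spirit of: cylc cat-log -f '\[(model|post).*(started|succeeded)' REG
grep -E '\[(model|post).*(started|succeeded)' /tmp/demo-suite.log
```

Only the first two lines are printed; the third ends in "submitted", which matches neither alternative in the final group.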

C.2.4 cat-state
 
Usage: cylc [info] cat-state [OPTIONS] REG 
 
Print the suite state dump file directly to stdout. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -d, --dump            Use the same display format as the 'cylc dump' 
                        command.

C.2.5 check-examples
 
 
USAGE: cylc [admin] check-examples 
 
Check that all cylc example suites validate successfully.

C.2.6 check-triggering
 
USAGE: cylc [hook] check-triggering ARGS 
 
This is a cylc shutdown event handler that compares the newly generated 
suite log with a previously generated reference log "reference.log" 
stored in the suite definition directory. Currently it just compares 
runtime triggering information, disregarding event order and timing, and 
fails the suite if there is any difference. This should be sufficient to 
verify correct scheduling of any suite that is not affected by different 
run-to-run conditional triggering. 
 
1) run your suite with "cylc run --generate-reference-log" to generate 
the reference log with resolved triggering information. Check manually 
that the reference run was correct. 
2) run reference tests with "cylc run --reference-test" - this 
automatically sets the shutdown event handler along with a suite timeout 
and "abort if shutdown handler fails", "abort on timeout", and "abort if 
any task fails". 
 
Reference tests can use any run mode: 
  simulation mode - tests that scheduling is equivalent to the reference 
  dummy mode - also tests that task hosting, job submission, job script 
   evaluation, and cylc messaging are not broken. 
  live mode - tests everything (but takes longer with real tasks!) 
 
 If any task fails, or if cylc itself fails, or if triggering is not 
 equivalent to the reference run, the test will abort with non-zero exit 
 status - so reference tests can be used as automated tests to check 
 that changes to cylc have not broken your suites.
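The comparison described above (triggering information only, with event order and timing disregarded) can be approximated with sort and diff. A minimal sketch, assuming made-up log content in the rough shape of cylc "triggered off" lines:

```shell
# Two made-up logs with the same triggering events in different order.
cat > /tmp/reference.log <<'EOF'
[model.2010082318] -triggered off ['obs.2010082318']
[post.2010082318] -triggered off ['model.2010082318']
EOF
cat > /tmp/new-run.log <<'EOF'
[post.2010082318] -triggered off ['model.2010082318']
[model.2010082318] -triggered off ['obs.2010082318']
EOF
# Extract triggering lines and sort to discard event order, then diff:
# an empty diff means the two runs triggered equivalently.
grep 'triggered off' /tmp/reference.log | sort > /tmp/ref.sorted
grep 'triggered off' /tmp/new-run.log   | sort > /tmp/new.sorted
if diff /tmp/ref.sorted /tmp/new.sorted >/dev/null; then
    echo "triggering equivalent"
else
    echo "triggering DIFFERS" >&2
fi
```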

C.2.7 checkvars
 
Usage: cylc [util] checkvars [OPTIONS] VARNAMES 
 
Check that each member of a list of environment variables is defined, 
and then optionally check their values according to the chosen 
commandline option. Note that THE VARIABLES MUST BE EXPORTED AS THIS 
SCRIPT NECESSARILY EXECUTES IN A SUBSHELL. 
 
All of the input variables are checked in turn and the results printed. 
If any problems are found then, depending on use of '-w,--warn-only', 
this script either aborts with exit status 1 (error) or emits a stern 
warning and exits with status 0 (success). 
 
Arguments: 
   VARNAMES     Space-separated list of environment variable names. 
 
Options: 
  -h, --help            show this help message and exit 
  -d, --dirs-exist      Check that the variables refer to directories that 
                        exist. 
  -c, --create-dirs     Attempt to create the directories referred to by the 
                        variables, if they do not already exist. 
  -p, --create-parent-dirs 
                        Attempt to create the parent directories of files 
                        referred to by the variables, if they do not already 
                        exist. 
  -f, --files-exist     Check that the variables refer to files that exist. 
  -i, --int             Check that the variables refer to integer values. 
  -s, --silent          Do not print the result of each check. 
  -w, --warn-only       Print a warning instead of aborting with error status.
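For illustration, the directory check performed by '-d,--dirs-exist' amounts to something like this pure-shell sketch (a simplification, not the actual cylc script; the function and variable names are hypothetical; requires bash for indirect expansion):

```shell
# Sketch of 'checkvars -d': each named variable must be defined (and
# exported, since the real script runs in a subshell) and must refer to
# an existing directory.
check_dirs() {
    local name
    for name in "$@"; do
        # ${!name} is bash indirect expansion: the value of the variable
        # whose name is stored in $name.
        if [ -z "${!name+set}" ]; then
            echo "ERROR: \$$name is undefined" >&2
            return 1
        elif [ ! -d "${!name}" ]; then
            echo "ERROR: \$$name=${!name} is not a directory" >&2
            return 1
        fi
        echo "ok: \$$name=${!name}"
    done
}

export DEMO_TMP_DIR=/tmp   # hypothetical variable for the demo
check_dirs DEMO_TMP_DIR
```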

C.2.8 conditions
 
USAGE: cylc [license] conditions [--help] 
Cylc is released under the GNU General Public License v3.0. 
This command prints the GPL v3.0 license in full. 
 
Options: 
  --help   Print this usage message.

C.2.9 copy
 
Usage: cylc [db] copy|cp [OPTIONS] REG REG2 TOPDIR 
 
Copy suite or group REG to TOPDIR, and register the copy as REG2. 
 
Consider the following three suites: 
 
% cylc db print '^foo'     # printed in flat form 
foo.bag     | "Test Suite Zero" | /home/bob/zero 
foo.bar.qux | "Test Suite Two"  | /home/bob/two 
foo.bar.baz | "Test Suite One"  | /home/bob/one 
 
% cylc db print -t '^foo'  # printed in tree form 
foo 
 |-bag    "Test Suite Zero" | /home/bob/zero 
 `-bar 
   |-baz  "Test Suite One"  | /home/bob/one 
   `-qux  "Test Suite Two"  | /home/bob/two 
 
These suites are stored in a flat directory structure under /home/bob, 
but they are organised in the suite database as a group 'foo' that 
contains the suite 'foo.bag' and a group 'foo.bar', which in turn 
contains the suites 'foo.bar.baz' and 'foo.bar.qux'. 
 
When you copy suites with this command, the target registration names 
are determined by REG2 and the name structure underneath REG, and 
the suite definition directories are copied into a directory tree under 
TOPDIR whose structure reflects the target registration names. If this 
is not what you want, you can copy suite definition directories manually 
and then register the copied directories manually with 'cylc register'. 
 
To copy suites between different databases use one or both of the 
--db-to, --db-from options.  If only one is used the other database 
(source or target) will be the default database, which may in turn 
be specified with the plain --db option. 
 
EXAMPLES (using the three suites above): 
 
% cylc db copy foo.bar.baz red /home/bob       # suite to suite 
  Copying suite definition for red 
% cylc db print "^red" 
  red | "Test Suite One" | /home/bob/red 
 
% cylc copy foo.bar.baz blue.green /home/bob   # suite to group 
  Copying suite definition for blue.green 
% cylc db pr "^blue" 
  blue.green | "Test Suite One" | /home/bob/blue/green 
 
% cylc copy foo.bar orange /home/bob           # group to group 
  Copying suite definition for orange.qux 
  Copying suite definition for orange.baz 
% cylc db pr "^orange" 
  orange.qux | "Test Suite Two" | /home/bob/orange/qux 
  orange.baz | "Test Suite One" | /home/bob/orange/baz 
 
Arguments: 
   REG                  Source suite name 
   REG2                 Target suite name 
   TOPDIR               Top level target directory. 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --db-from=DB          Source suite database, specified as for --db. Use 
                        --db-to, or --db, or your default DB as the target. 
  --db-to=DB            Target suite database, specified as for --db. Use 
                        --db-from, or --db, or your default DB as the source.

C.2.10 cycletime
 
Usage: cylc [util] cycletime [OPTIONS] [CYCLE] 
 
Arithmetic cycle time offset computation, and filename templating. 
 
Examples: 
 
1) print offset from an explicit cycle time: 
  % cylc [util] cycletime --offset-hours=6 2010082318 
  2010082400 
 
2) print offset from $CYLC_TASK_CYCLE_TIME (as in suite tasks): 
  % export CYLC_TASK_CYCLE_TIME=2010082318 
  % cylc cycletime --offset-hours=-6 
  2010082312 
 
3) cycle time filename templating, explicit template: 
  % export CYLC_TASK_CYCLE_TIME=201008 
  % cylc cycletime --offset-years=2 --template=foo-YYYYMM.nc 
  foo-201208.nc 
 
4) cycle time filename templating, template in a variable: 
  % export CYLC_TASK_CYCLE_TIME=201008 
  % export MYTEMPLATE=foo-YYYYMM.nc 
  % cylc cycletime --offset-years=2 --template=MYTEMPLATE 
  foo-201208.nc 
 
Arguments: 
   [CYCLE]    YYYY[MM[DD[HH[mm[ss]]]]], default $CYLC_TASK_CYCLE_TIME 
 
Options: 
  -h, --help            show this help message and exit 
  --offset-hours=HOURS  Add N hours to CYCLE (may be negative) 
  --offset-days=DAYS    Add N days to CYCLE (N may be negative) 
  --offset-months=MONTHS 
                        Add N months to CYCLE (N may be negative) 
  --offset-years=YEARS  Add N years to CYCLE (N may be negative) 
  --template=TEMPLATE   Filename template string or variable 
  --print-year          Print only YYYY of result 
  --print-month         Print only MM of result 
  --print-day           Print only DD of result 
  --print-hour          Print only HH of result

C.2.11 dbviewer
 
Usage: cylc [db] dbviewer [OPTIONS] 
 
This command launches a GUI for viewing your database of registered 
suites. Right-click on suites or groups to operate on them (edit, copy, 
launch gcylc etc.). 
 
Arguments: 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --pyro-timeout=SEC    Set a timeout for the network connections used when 
                        scanning ports for running suites. The default is no 
                        timeout.

C.2.12 depend
 
Usage: cylc [control] depend [OPTIONS] REG TASK DEP 
 
Add new dependencies on the fly to tasks in running suite REG. If DEP 
is a task ID the target TASK will depend on that task finishing, 
otherwise DEP can be an explicit quoted message such as 
  "Data files uploaded for 2011080806" 
(presumably there will be another task in the suite, or you will insert 
one, that reports that message as an output). 
 
Prerequisites added on the fly are not propagated to the successors 
of TASK, and they will not persist in TASK across a suite restart. 
 
Arguments: 
   REG                Suite name 
   TASK               Target task 
   DEP                New dependency 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.13 diff
 
Usage: cylc [prep] diff|compare [OPTIONS] REG REG2 
 
Compare two suite definitions and display any differences. 
 
Differencing is done after parsing the suite.rc files so it takes 
account of default values that are not explicitly defined, it disregards 
the order of configuration items, and it sees any include-file content 
after inlining has occurred. 
 
Note that seemingly identical suites normally differ due to inherited 
default configuration values (e.g. the default job submission log 
directory). 
 
Files in the suite bin directory and other sub-directories of the 
suite definition directory are not currently differenced. 
 
Arguments: 
   REG                Suite name 
   REG2               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -n, --nested          print suite.rc section headings in nested form.

C.2.14 documentation
 
Usage: cylc [info] documentation|browse [OPTIONS] 
 
By default this command opens the cylc documentation index in your 
browser in file:// mode. Alternatively it can open the PDF Cylc User 
Guide directly, or browse the cylc internet homepage, or - if your site 
has a web server with access to the cylc documentation - an intranet 
documentation URL. The browser and PDF reader to use, and the intranet 
URL, are determined by cylc site/user configuration - for details see 
  $ cylc get-global-config --help 
 
Options: 
  -h, --help      show this help message and exit 
  -p, --pdf       Open the PDF User Guide directly 
  -w, --internet  Browse the cylc internet homepage

C.2.15 dump
 
Usage: cylc [info] dump [OPTIONS] REG 
 
Print state information (e.g. the state of each task) from a running 
suite. For small suites 'watch cylc [info] dump SUITE' is an effective 
non-GUI real time monitor (but see also 'cylc monitor'). 
 
For more information about a specific task, such as the current state of 
its prerequisites and outputs, see 'cylc [info] show'. 
 
Examples: 
 Display the state of all running tasks, sorted by cycle time: 
 % cylc [info] dump --tasks --sort SUITE | grep running 
 
 Display the state of all tasks in a particular cycle: 
 % cylc [info] dump -t SUITE | grep 2010082406 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -g, --global          Global information only. 
  -t, --tasks           Task states only. 
  -s, --sort            Task states only; sort by cycle time instead of name.

C.2.16 edit
 
Usage: cylc [prep] edit [OPTIONS] REG 
 
Edit suite definitions without having to move to their directory 
locations, and with optional reversible inlining of include-files. Note 
that Jinja2 suites can only be edited in raw form but the processed 
version can be viewed with 'cylc [prep] view -p'. 
 
1/ cylc [prep] edit REG 
Change to the suite definition directory and edit the suite.rc file. 
 
2/ cylc [prep] edit -i,--inline REG 
Edit the suite with include-files inlined between special markers. The 
original suite.rc file is temporarily replaced so that the inlined 
version is "live" during editing (i.e. you can run suites during 
editing and cylc will pick up changes to the suite definition). The 
inlined file is then split into its constituent include-files 
again when you exit the editor. Include-files can be nested or 
multiply-included; in the latter case only the first inclusion is 
inlined (this prevents conflicting changes made to the same file). 
 
3/ cylc [prep] edit --cleanup REG 
Remove backup files left by previous INLINED edit sessions. 
 
INLINED EDITING SAFETY: The suite.rc file and its include-files are 
automatically backed up prior to an inlined editing session. If the 
editor dies mid-session just invoke 'cylc edit -i' again to recover from 
the last saved inlined file. On exiting the editor, if any of the 
original include-files are found to have changed due to external 
intervention during editing you will be warned and the affected files 
will be written to new backups instead of overwriting the originals. 
Finally, the inlined suite.rc file is also backed up on exiting 
the editor, to allow recovery in case of accidental corruption of the 
include-file boundary markers in the inlined file. 
 
The edit process is spawned in the foreground as follows: 
  % <editor> suite.rc 
Where <editor> is defined in the cylc site and user config files 
($CYLC_DIR/conf/globals/cylc.rc and $HOME/.cylc/cylc.rc). 
 
See also 'cylc [prep] view'. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -i, --inline          Edit with include-files inlined as described above. 
  --cleanup             Remove backup files left by previous inlined edit 
                        sessions. 
  -g, --gui             Force use of the configured GUI editor.

C.2.17 email-suite
 
USAGE: cylc [hook] email-suite EVENT SUITE MESSAGE 
 
This is a simple suite event hook script that sends an email. 
The command line arguments are supplied automatically by cylc. 
 
For example, to get an email alert when a suite shuts down: 
 
# SUITE.RC 
[cylc] 
   [[environment]] 
      MAIL_ADDRESS = foo@bar.baz.waz 
   [[event hooks]] 
      events = shutdown 
      script = cylc email-suite 
 
See the Suite.rc Reference (Cylc User Guide) for more information 
on suite and task event hooks and event handler scripts.

C.2.18 email-task
 
USAGE: cylc [hook] email-task EVENT SUITE TASKID MESSAGE 
 
This is a simple task event hook handler script that sends an email. 
The command line arguments are supplied automatically by cylc. 
 
For example, to get an email alert whenever any task fails: 
 
# SUITE.RC 
[cylc] 
   [[environment]] 
      MAIL_ADDRESS = foo@bar.baz.waz 
[runtime] 
   [[root]] 
      [[[event hooks]]] 
         events = failed 
         script = cylc email-task 
 
See the Suite.rc Reference (Cylc User Guide) for more information 
on suite and task event hooks and event handler scripts.

C.2.19 failed
 
Usage: cylc [task] failed [OPTIONS] [REASON] 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
The failed command reports failure of task execution (and releases the 
task lock to the lockserver if necessary). It is automatically called in 
case of an error trapped by the task job script, but it can also be 
called explicitly for self-detected failures if necessary. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] message 
    cylc [task] started 
    cylc [task] succeeded 
 
Arguments: 
    REASON        - message explaining why the task failed. 
 
Options: 
  -h, --help     show this help message and exit 
  -v, --verbose  Verbose output mode.

C.2.20 get-config
 
Usage: cylc [info] get-config [OPTIONS] REG 
 
Print configuration settings parsed from a suite definition, after 
runtime inheritance processing and including default values for items 
that are not explicitly set in the suite.rc file. 
 
Config items containing spaces must be quoted on the command line. If 
a single item is requested only its value will be printed; otherwise the 
full nested structure below the requested config section is printed. 
 
Example, from a suite registered as foo.bar: 
|# SUITE.RC 
|[runtime] 
|    [[modelX]] 
|        [[[environment]]] 
|            FOO = foo 
|            BAR = bar 
 
$ cylc get-config --item=[runtime][modelX][environment]FOO foo.bar 
foo 
 
$ cylc get-config --item=[runtime][modelX][environment] foo.bar 
FOO = foo 
BAR = bar 
 
$ cylc get-config --item=[runtime][modelX] foo.bar 
... 
[[[environment]]] 
    FOO = foo 
    BAR = bar 
... 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -i [SEC...]ITEM, --item=[SEC...]ITEM 
                        A config item or section to print. Can be used multiple 
                        times to print several items at once. 
  -t, --tasks           Print configured task list. 
  -m, --mark-up         Prefix output lines with '!cylc!' to aid in automated 
                        parsing (output can be contaminated by stdout from 
                        login scripts, for example, for remote invocation). 
  -p, --python          Write out the config data structure in Python native 
                        format. 
  --sparse              Only report [runtime] items explicitly set in the 
                        suite.rc file, not underlying default settings. 
  -o, --one-line        Combine the result from multiple --item requests onto 
                        one line, with internal spaces replaced by the '*' 
                        character. For single-value items only. 
  -a, --all-tasks       For [runtime] items (e.g. --item='command scripting') 
                        report values for all tasks prefixed by task name.

C.2.21 get-directory
 
Usage: cylc [db] get-directory REG 
 
Retrieve and print the directory location of suite REG. 
Tip: here's how to move to a suite definition directory: 
  $ cd $(cylc get-dir REG) 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line.

C.2.22 get-global-config
 
Usage: cylc [admin] get-global-config [OPTIONS] 
 
Print settings determined by the cylc site and user configuration files, 
and auto-generate those files with all settings initially commented out. 
 
1) $CYLC_DIR/conf/siterc/cfgspec  # legal items and default values 
2) $CYLC_DIR/conf/siterc/site.rc  # site file (overrides defaults) 
3) $HOME/.cylc/user.rc            # user file (overrides site) 
 
Without options, this command prints all global settings to stdout. 
 
Options: 
  -h, --help        show this help message and exit 
  -s, --write-site  Write a site configuration file to 
                    $CYLC_DIR/conf/siterc/site.rc. Uncomment and modify items 
                    in the file as required. 
  -u, --write-user  Write a user configuration file to $HOME/.cylc/user.rc. 
                    Uncomment and modify items in the file as required.
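
A generated user file can then be edited in place. The section and item
names below are invented for illustration only; run 'cylc get-global-config'
(or use '--write-user' to get a fully commented template) to see the legal
items for your installation:

```
# $HOME/.cylc/user.rc -- illustrative only; uncomment and modify the
# items you need in the template written by --write-user.
[editors]
    terminal = vim
    gui = gvim -f
```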

C.2.23 graph
 
Usage: 1/ cylc [prep] graph [OPTIONS] REG [START [STOP]] 
     Plot the suite.rc dependency graph for REG. 
       2/ cylc [prep] graph [OPTIONS] -f,--file FILE 
     Plot the specified dot-language graph file. 
 
Plot cylc dependency graphs in a pannable, zoomable viewer. 
 
The viewer updates automatically when the suite.rc file is saved during 
editing. By default the full cold start graph is plotted; you can omit 
cold start tasks with the '-w,--warmstart' option.  Specify the optional 
initial and final cycle time arguments to override the suite.rc defaults. 
If you just override the initial cycle, only that cycle will be plotted. 
 
GRAPH VIEWER CONTROLS: 
     Left-click to center the graph on a node. 
     Left-drag to pan the view. 
     Zoom buttons, mouse-wheel, or ctrl-left-drag to zoom in and out. 
     Shift-left-drag to zoom in on a box. 
     Also: "Best Fit" and "Normal Size". 
     Landscape mode on/off. 
  Family (namespace) grouping controls: 
    Toolbar: 
     "group" - group all families up to root. 
     "ungroup" - recursively ungroup all families. 
    Right-click menu: 
     "group" - close this node's parent family. 
     "ungroup" - open this family node. 
     "recursive ungroup" - ungroup all families below this node. 
 
Arguments: 
   [REG]                 Suite name 
   [START]               Initial cycle time to plot (default=2999010100) 
   [STOP]                Final cycle time to plot (default=2999010123) 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -w, --warmstart       Plot the mid-stream warm start (raw start) dependency 
                        graph (the default is cold start). 
  -n, --namespaces      Plot the suite namespace inheritance hierarchy (task 
                        run time properties). 
  -l, --landscape       Plot in landscape mode instead of portrait (the 
                        default). Cannot be used in conjunction with -f,--file. 
  -f FILE, --file=FILE  View a specific dot-language graphfile. 
  -o FILE, --output=FILE 
                        Write out an image file, format determined by file 
                        extension. The file will be rewritten if the suite 
                        definition is changed while the viewer is running. 
                        Available formats depend on your graphviz build and 
                        may include png, jpg, gif, svg, pdf, ps, etc.

C.2.24 gui
 
Usage: cylc gui [OPTIONS] [REG] 
gcylc [OPTIONS] [REG] 
 
The cylc GUI for suite control etc. This program can also be launched by 
right-clicking on a suite in cylc dbviewer. 
 
If the '-t,--timeout=' option is used the timeout value will be passed 
on to the suite if it is subsequently started from the GUI 
(and it will in turn be passed to tasks submitted by the suite). 
 
Task state color themes can be changed via the View menu. To customize 
themes copy $CYLC_DIR/conf/gcylcrc/gcylc.rc.eg to $HOME/.cylc/gcylc.rc 
and follow the instructions in the file. 
 
Arguments: 
   [REG]               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -w VIEWS, --views=VIEWS 
                        Initial view panes for the suite control GUI. Choose 
                        one or two, comma-separated, from 'dot', 'text', and 
                        'graph'; the default is 'dot,text'. 
  -u NAME, --use-theme=NAME 
                        The task state color and icon theme to use at start-up 
                        (this overrides the theme specified in your gcylc.rc 
                        file). 
  -l, --list-themes     Print available task state color themes (built-in 
                        themes and any in $HOME/.cylc/gcylc.rc).

C.2.25 hold
 
Usage: cylc [control] hold [OPTIONS] REG [TASK] 
 
Holding a suite stops it from submitting tasks that are ready to run, 
until it is released. Holding a waiting TASK prevents it from running 
until it is released. 
 
See also 'cylc [control] release'. 
 
Arguments: 
   REG                  Suite name 
   [TASK]               Task to hold (NAME.CYCLE) 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.26 housekeeping
 
Usage: 1/ cylc [util] housekeeping [OPTIONS] SOURCE MATCH OPER OFFSET [TARGET] 
       2/ cylc [util] housekeeping [OPTIONS] FILE 
 
Parallel archiving and cleanup of files or directories with names 
that contain a cycle time. Matched items are grouped into batches in 
which members are processed in parallel, by spawned sub-processes. 
Once all batch members have completed, the next batch is processed. 
 
OPERATE ('delete', 'move', or 'copy') on items (files or directories) 
matching a Python-style regular expression MATCH in directory SOURCE 
whose names contain a cycle time (as YYYYMMDDHH, or YYYYMMDD and HH 
separately) more than OFFSET (integer hours) earlier than a base cycle 
time (which can be $CYLC_TASK_CYCLE_TIME if called by a cylc task, or 
otherwise specified on the command line). 
 
FILE is a housekeeping config file containing one or more lines of: 
 
   VARNAME=VALUE 
   # comment 
   SOURCE    MATCH    OPERATION   OFFSET   [TARGET] 
 
(example: $CYLC_DIR/conf/housekeeping.eg) 
 
MATCH must be a Python-style regular expression (NOT A SHELL GLOB 
EXPRESSION!) to match the names of items to be operated on AND to 
extract the cycle time from the names via one or two parenthesized 
sub-expressions - '(\d{10})' for YYYYMMDDHH, '(\d{8})' and '(\d{2})' 
for YYYYMMDD and HH in either order. Partial matching can be used 
(partial: 'foo-(\d{10})'; full: '^foo-(\d{10})$'). Any additional 
parenthesized sub-expressions, e.g. for either-or matching, MUST 
be of the (?:...) type to avoid creating a new match group. 
 
SOURCE and TARGET must be on the local filesystem and may contain 
environment variables such as $HOME or ${FOO} (e.g. as defined in the 
suite.rc file for suite housekeeping tasks). Variables defined in 
the housekeeping file itself can also be used, as above. 
 
TARGET may contain the strings YYYYMMDDHH, YYYY, MM, DD, HH; these 
will be replaced with the extracted cycle time for each matched item, 
e.g. $ARCHIVE/oper/YYYYMM/DD. 
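
The matching, offset test, and TARGET substitution described above can be
sketched in Python. This is an illustrative standalone function, not the
actual housekeeping implementation; the function name and file names are
invented:

```python
import re
from datetime import datetime, timedelta

def select_and_map(name, match, offset_hours, base_cycle, target):
    """Return the expanded TARGET path for an item whose name matches
    MATCH and whose cycle time is more than OFFSET hours earlier than
    the base cycle time; return None otherwise."""
    m = re.match(match, name)  # partial (prefix) matching, as in MATCH
    if not m:
        return None
    cycle = m.group(1)  # YYYYMMDDHH from the first parenthesized group
    item_time = datetime.strptime(cycle, "%Y%m%d%H")
    base_time = datetime.strptime(base_cycle, "%Y%m%d%H")
    if base_time - item_time <= timedelta(hours=offset_hours):
        return None  # too recent to operate on
    # Replace the TARGET placeholder strings with components of the
    # extracted cycle time (longest placeholder first).
    out = target.replace("YYYYMMDDHH", cycle)
    out = out.replace("YYYY", cycle[0:4]).replace("MM", cycle[4:6])
    out = out.replace("DD", cycle[6:8]).replace("HH", cycle[8:10])
    return out

# A file from 2013-01-01 00Z, base cycle 2013-01-08 00Z, 144 h offset:
select_and_map("foo-2013010100.nc", r"foo-(\d{10})", 144,
               "2013010800", "/archive/YYYYMM/DD")
# → "/archive/201301/01"
```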
 
If TARGET is specified for the 'delete' operation, matched items in 
SOURCE will not be deleted unless an identical item is found in 
TARGET. This can be used to check that important files have been 
successfully archived before deleting the originals. 
 
The 'move' and 'copy' operations are aborted if the TARGET/item already 
exists, but a warning is emitted if the source and target items are not 
identical. 
 
To implement a simple ROLLING ARCHIVE of cycle-time labelled files or 
directories: just use 'delete' with OFFSET set to the archive length. 
 
SAFE ARCHIVING: The 'move' operation is safe - it uses Python's 
shutil.move(), which renames files on the local disk partition and 
otherwise copies before deleting the original. But for extra safety 
consider two-step archiving and cleanup: 
1/ copy files to archive, then 
2/ delete the originals only if identical copies are found in the archive. 
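
Put together, a housekeeping config file implementing this two-step scheme
might look like the following (paths and patterns invented for illustration;
see $CYLC_DIR/conf/housekeeping.eg for a real example):

```
# hypothetical housekeeping config file
ARCHIVE=/path/to/archive
# 1/ copy files older than 6 days (144 h) to the archive:
$HOME/output    ^foo-(\d{10})\.nc$    copy      144    $ARCHIVE/YYYYMM
# 2/ delete originals older than 8 days, only if found in the archive:
$HOME/output    ^foo-(\d{10})\.nc$    delete    192    $ARCHIVE/YYYYMM
```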
 
Options: 
  -h, --help            show this help message and exit 
  --cycletime=YYYYMMDDHH 
                        Cycle time, defaults to $CYLC_TASK_CYCLE_TIME 
  --mode=MODE           Octal umask for creating new destination directories. 
                        E.g. 0775 for drwxrwxr-x 
  -o LIST, --only=LIST  Only action config file lines matching any member of a 
                        comma-separated list of regular expressions. 
  -e LIST, --except=LIST 
                        Only action config file lines NOT matching any member 
                        of a comma-separated list of regular expressions. 
  -v, --verbose         print the result of every action 
  -d, --debug           print item matching output. 
  -c, --cheapdiff       Assume source and target identical if the same size 
  -b INT, --batchsize=INT 
                        Batch size for parallel processing of matched files. 
                        Members of each batch (matched items) are processed in 
                        parallel; when a batch completes, the next batch 
                        starts. Defaults to a batch size of 1, i.e. sequential 
                        processing.

C.2.27 import-examples
 
 
USAGE: cylc [admin] import-examples TOPDIR 
 
Copy the cylc example suites to TOPDIR and register them for use. 
 
Arguments: 
   TOPDIR     Example suite destination directory: TOPDIR/examples/.

C.2.28 insert
 
Usage: cylc [control] insert [OPTIONS] REG TASK[.STOP] 
 
Insert a task into a running suite. Inserted tasks will spawn successors 
as normal unless they are 'one-off' tasks. 
See also 'cylc [task] submit', for running single tasks without the scheduler. 
 
Arguments: 
   REG                       Suite name 
   TASK[.STOP]               Task to insert (NAME.TAG)[.STOPTAG] 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.29 jobscript
 
 
USAGE: cylc [prep] jobscript [OPTIONS] REG TASK 
 
Generate a task job script and print it to stdout. 
 
Here's how to capture the script in the vim editor: 
  % cylc jobscript REG TASK | vim - 
Emacs unfortunately cannot read from stdin: 
  % cylc jobscript REG TASK > tmp.sh; emacs tmp.sh 
 
This command wraps 'cylc [control] submit --dry-run'. 
Other options (e.g. for suite host and owner) are passed 
through to the submit command. 
 
Options: 
  -h,--help   - print this usage message. 
 (see also 'cylc submit --help') 
 
Arguments: 
  REG         - Registered suite name. 
  TASK        - Task ID (NAME.TAG)

C.2.30 list
 
Usage: cylc [info|prep] list|ls [OPTIONS] REG 
 
Print a suite task list or runtime namespace tree. By default the 
runtime tree is printed as if for single-inheritance based on 
first parents; use '-m/--multi' to print a tree based on the full 
C3-linearized hierarchy (paths through the tree will show the full 
precedence order of ancestral namespaces). 
To graph the runtime namespace tree, see 'cylc graph'. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -t, --tree            Print the full runtime inheritance hierarchy. 
  -b, --box             (with -t,--tree) Use unicode box characters. 
  -m, --multi           (with -t,--tree) Base the tree on the full 
                        C3-linearized runtime hierarchy.

C.2.31 lockclient
 
Usage: cylc [util] lockclient|lc [OPTIONS] 
 
This is the command line client interface to the cylc lockserver daemon, 
for server interrogation and manual lock management. 
 
Use of the lockserver is optional (see suite.rc documentation) 
 
Manual lock acquisition is mainly for testing purposes, but manual 
release may be required to remove stale locks if a suite or task dies 
without cleaning up after itself. 
 
See also: 
    cylc lockserver 
 
Options: 
  -h, --help            show this help message and exit 
  --acquire-task=SUITE:TASK.CYCLE 
                        Acquire a task lock. 
  --release-task=SUITE:TASK.CYCLE 
                        Release a task lock. 
  --acquire-suite=SUITE 
                        Acquire an exclusive suite lock. 
  --acquire-suite-nonex=SUITE 
                        Acquire a non-exclusive suite lock. 
  --release-suite=SUITE 
                        Release a suite and associated task locks 
  -p, --print           Print all locks. 
  -l, --list            List all locks (same as -p). 
  -c, --clear           Release all locks. 
  -f, --filenames       Print lockserver PID, log, and state filenames. 
  -t SECONDS, --timeout=SECONDS 
                        Set a network connection timeout for Pyro.

C.2.32 lockserver
 
Usage: cylc [util] lockserver [-f CONFIG] ACTION 
 
The cylc lockserver daemon brokers suite and task locks for a single 
user. These locks are analogous to traditional lock files, but they work 
even for tasks that start and finish executing on different hosts. Suite 
locks prevent multiple instances of the same suite from running at the 
same time (even if registered under different names) unless the suite 
allows that. Task locks do the same for individual tasks (even if 
submitted outside of their suite using 'cylc submit'). 
 
The command line user interface for interrogating the daemon, and 
for manual lock management, is 'cylc lockclient'. 
 
Use of the lockserver is optional (see suite.rc documentation). 
 
The lockserver reads a config file that specifies the location of the 
daemon's process ID, state, and log files. The default config file 
is '$CYLC_DIR/conf/lockserver.conf'. You can specify an alternative 
config file on the command line, but then all subsequent interaction 
with the daemon via the lockclient command must also specify the same 
file (this is really only for testing purposes). The default process ID, 
state, and log files paths are relative to $HOME so this should be 
sufficient for all users. 
 
The state file records currently held locks and, if it exists at 
startup, is used to initialize the lockserver (i.e. suite and task locks 
are not lost if the lockserver is killed and restarted). All locking 
activity is recorded in the log file. 
 
Arguments: 
  ACTION   -  'start', 'stop', 'status', 'restart', or 'debug' 
               In debug mode the server does not daemonize, so its 
               stdout and stderr streams are not lost. 
 
Options: 
  -h, --help            show this help message and exit 
  -c CONFIGFILE, --config-file=CONFIGFILE 
                        Config file (default: $CYLC_DIR/conf/lockserver.conf). 

C.2.33 message
 
Usage: cylc [task] message [OPTIONS] MESSAGE 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] started 
    cylc [task] succeeded 
    cylc [task] failed 
 
Options: 
  -h, --help            show this help message and exit 
  -p PRIORITY           message priority: NORMAL, WARNING, or CRITICAL; 
                        default NORMAL. 
  --next-restart-completed 
                        Report next restart file(s) completed 
  --all-restart-outputs-completed 
                        Report all restart outputs completed at once. 
  --all-outputs-completed 
                        Report all internal outputs completed at once. 
  -v, --verbose         Verbose output mode.

C.2.34 monitor
 
Usage: cylc [info] monitor [OPTIONS] REG 
 
A terminal-based suite monitor that updates the current state of all 
tasks in real time. It is effective even for quite large suites if 
'--align' is not used. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -a, --align           Align columns by task name. This option is only useful 
                        for small suites.

C.2.35 nudge
 
Usage: cylc [control] nudge [OPTIONS] REG 
 
Cause the cylc task processing loop to be invoked in a running suite. 
 
This happens automatically when the state of any task changes such that 
task processing (dependency negotiation etc.) is required, or if a 
clock-triggered task is ready to run. 
 
The main reason to use this command is to update the "estimated time till 
completion" intervals shown in the tree-view suite control GUI, during 
periods when nothing else is happening. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.36 ping
 
Usage: cylc [discover] ping [OPTIONS] REG [TASK] 
 
If suite REG (or task TASK in it) is running, exit (silently, unless 
-v,--verbose is specified); else print an error message and exit with 
error status. For tasks, success means the task proxy is currently in 
the 'running' state. 
 
Arguments: 
   REG                  Suite name 
   [TASK]               Task NAME.TAG (TAG is cycle time or integer) 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting. 
  --print-ports         Print the port range from the site config file 
                        ($CYLC_DIR/conf/globals/cylc.rc).

C.2.37 print
 
Usage: cylc [db] print [OPTIONS] [REGEX] 
 
Print suite database registrations. 
 
Note on result filtering: 
  (a) The filter patterns are Regular Expressions, not shell globs, so 
the general wildcard is '.*' (match zero or more of anything), NOT '*'. 
  (b) For printing purposes there is an implicit wildcard at the end of 
each pattern ('foo' is the same as 'foo.*'); use the string end marker 
to prevent this ('foo$' matches only literal 'foo'). 
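The filtering rule above can be illustrated with Python's re module. This is a sketch of the documented behaviour, not cylc's own filtering code:

```python
import re

def matches(pattern, name):
    # Implicit trailing wildcard, as described above: 'foo' behaves
    # like 'foo.*', while 'foo$' anchors the match to the exact name.
    return re.match(pattern + ".*", name) is not None

# matches("foo", "foo.bar") -> True; matches("foo$", "foo.bar") -> False
```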
 
Arguments: 
   [REGEX]               Suite name regular expression pattern 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -t, --tree            Print registrations in nested tree form. 
  -b, --box             Use unicode box drawing characters in tree views. 
  -a, --align           Align columns. 
  -x                    don't print suite definition directory paths. 
  -y                    Don't print suite titles. 
  --fail                Fail (exit 1) if no matching suites are found.

C.2.38 purge
 
Usage: cylc [control] purge [OPTIONS] REG TASK STOP 
 
Remove an entire tree of dependent tasks, over multiple cycles into the 
future, from a running suite. The purge top task will be forced to 
spawn and will then be removed; so will every task that depends on it, 
every task that depends on those, and so on, up to the given stop cycle 
time. 
 
WARNING: THIS COMMAND IS DANGEROUS but in case of disaster you can 
restart the suite from the automatic pre-purge state dump (the filename 
will be logged by cylc before the purge is actioned.) 
 
UNDERSTANDING HOW PURGE WORKS: cylc identifies tasks that depend on 
the top task, and then on its downstream dependents, and then on 
theirs, etc., by simulating what would happen if the top task were to 
trigger: it artificially sets the top task to the "succeeded" state 
then negotiates dependencies and artificially sets any tasks whose 
prerequisites get satisfied to "succeeded"; then it negotiates 
dependencies again, and so on until the stop cycle is reached or nothing 
new triggers. Finally it marks "virtually triggered" tasks for removal. 
Consequently: 
 * Dependent tasks will only be identified as such, and purged, if they 
   have already spawned into the top cycle - so let them catch up first. 
 * You can't purge a tree of tasks that has already triggered, because 
   the algorithm relies on detecting new triggering. 
Note also the suite runahead limit must be large enough to bridge the 
purge gap or runahead-held tasks may prevent the purge completing fully. 
 
[development note: post cylc-3.0 we could potentially use the suite 
graph to determine downstream tasks to remove, without doing this 
internal triggering simulation.] 
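The virtual-triggering simulation described above amounts to a fixed-point computation. The following is an illustrative model only, not cylc's implementation; the task IDs and the prerequisite mapping format are hypothetical:

```python
def purge_set(top, prereqs):
    """prereqs: task -> set of tasks it depends on (hypothetical format).
    Return every task that would 'virtually trigger' if top succeeded."""
    succeeded = {top}
    changed = True
    while changed:
        changed = False
        for task, deps in prereqs.items():
            # A task triggers when all of its prerequisites are satisfied.
            if task not in succeeded and deps and deps <= succeeded:
                succeeded.add(task)
                changed = True
    return succeeded  # these tasks (including top) are marked for removal

# e.g. purge_set("a.1", {"b.1": {"a.1"}, "c.1": {"b.1"}, "d.1": {"x.1"}})
#      -> {"a.1", "b.1", "c.1"}
```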
 
Arguments: 
   REG                Suite name 
   TASK               Task (NAME.CYCLE) to start purge 
   STOP               Cycle (inclusive!) to stop purge 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.39 random
 
Usage: cylc [util] random A B 
 
Generate a random integer in the range [A,B). This is just a command 
interface to Python's random.randrange() function. 
 
Arguments: 
   A     start of the range interval (inclusive) 
   B     end of the random range (exclusive, so must be > A) 
 
Options: 
  -h, --help  show this help message and exit

C.2.40 refresh
 
Usage: cylc [db] refresh [OPTIONS] [REGEX] 
 
Check a suite database for invalid registrations (no suite definition 
directory or suite.rc file) and refresh suite titles in case they have 
changed since the suite was registered. Explicit wildcards must be 
used in the match pattern (e.g. 'f' will not match 'foo.bar' unless 
you use 'f.*'). 
 
Arguments: 
   [REGEX]               Suite name match pattern 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -u, --unregister      Automatically unregister invalid registrations.

C.2.41 register
 
Usage: cylc [db] register [OPTIONS] REG PATH 
 
Register the suite definition located in PATH as REG. 
 
Suite names are hierarchical, delimited by '.' (foo.bar.baz); they 
may contain letters, digits, underscore, and hyphens. Colons are not 
allowed because directory paths incorporating the suite name are 
sometimes needed in PATH variables. 
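The naming rules above can be captured in a single pattern. This is a hypothetical validator for illustration only; cylc's actual checks may differ:

```python
import re

# Hierarchical names: '.'-delimited components of letters, digits,
# underscores and hyphens; anything else (e.g. ':') is rejected.
SUITE_NAME = re.compile(r"^[A-Za-z0-9_-]+(\.[A-Za-z0-9_-]+)*$")

def valid_suite_name(name):
    return SUITE_NAME.match(name) is not None

# valid_suite_name("foo.bar.baz") -> True
# valid_suite_name("foo:bar")     -> False
```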
 
EXAMPLES: 
 
For suite definition directories /home/bob/(one,two,three,four): 
 
% cylc db reg bob         /home/bob/one 
% cylc db reg foo.bag     /home/bob/two 
% cylc db reg foo.bar.baz /home/bob/three 
% cylc db reg foo.bar.waz /home/bob/four 
 
% cylc db pr '^foo'             # print in flat form 
  bob         | "Test Suite One"   | /home/bob/one 
  foo.bag     | "Test Suite Two"   | /home/bob/two 
  foo.bar.baz | "Test Suite Three" | /home/bob/three 
  foo.bar.waz | "Test Suite Four"  | /home/bob/four 
 
% cylc db pr -t '^foo'          # print in tree form 
  bob        "Test Suite One"   | /home/bob/one 
  foo 
   |-bag     "Test Suite Two"   | /home/bob/two 
   `-bar 
     |-baz   "Test Suite Three" | /home/bob/three 
     `-waz   "Test Suite Four"  | /home/bob/four 
 
Arguments: 
   REG                Suite name 
   PATH               Suite definition directory 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line.

C.2.42 release
 
Usage: cylc [control] release|unhold [OPTIONS] REG [TASK] 
 
Release a suite or a single task from a hold, allowing it to run as normal. 
 
Holding a suite stops it from submitting tasks that are ready to run, 
until it is released. Holding a waiting TASK in a suite prevents it 
from running or spawning successors, until it is released. 
 
See also 'cylc [control] hold'. 
 
Arguments: 
   REG                  Suite name 
   [TASK]               Task to release (NAME.CYCLE) 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.43 reload
 
Usage: cylc [control] reload [OPTIONS] REG 
 
Tell a suite to reload its definition at run time. All settings 
including task definitions, with the exception of suite log 
configuration, can be changed on reload. Note that defined tasks can be 
added to or removed from a running suite with the 'cylc insert' and 
'cylc remove' commands, without reloading. This command also allows 
addition and removal of actual task definitions, and therefore insertion 
of tasks that were not defined at all when the suite started (you will 
still need to manually insert a particular instance of a newly defined 
task). Live task proxies that are orphaned by a reload (i.e. their task 
definitions have been removed) will be removed from the task pool if 
they have not started running yet. Changes to task definitions take 
effect immediately, unless a task is already running at reload time. 
 
If the suite was started with Jinja2 template variables set on the 
command line (cylc run --set FOO=bar REG) the same template settings 
apply to the reload (only changes to the suite.rc file itself are 
reloaded). 
 
If the modified suite definition does not parse, failure to reload will 
be reported but no harm will be done to the running suite. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.44 remove
 
Usage: cylc [control] remove|kill [OPTIONS] REG TARGET 
 
Remove a single task, or all tasks with a common TAG (cycle time or 
integer) from a running suite. 
 
Target tasks will be forced to spawn successors before being removed if 
they have not done so already, unless you use '--no-spawn'. 
 
Arguments: 
   REG                  Suite name 
   TARGET               NAME.TAG to remove a single task; CYCLE or INT 
                        to remove all tasks with the same tag. 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting. 
  --no-spawn            Do not spawn successors before removal.

C.2.45 reregister
 
Usage: cylc [db] reregister|rename [OPTIONS] REG1 REG2 
 
Change the name of a suite (or group of suites) from REG1 to REG2. 
Example: 
  cylc db rereg foo.bar.baz test.baz 
 
Arguments: 
   REG1               original name 
   REG2               new name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line.

C.2.46 reset
 
Usage: cylc [control] reset [OPTIONS] REG TASK 
 
Force a task's state to: 
 1/ 'ready' .... (--ready)    ...... all prerequisites satisfied (default) 
 2/ 'waiting' .. (--waiting) ...... prerequisites not satisfied yet 
 3/ 'succeeded'  (--succeeded) .... all outputs completed 
 4/ 'failed' ... (--failed) 
 Or: 
 5/ force it to spawn if it hasn't done so already (--spawn) 
 
Resetting a task to 'ready' will cause it to trigger immediately unless 
the suite is held, in which case the task will trigger when normal 
operation is resumed. 
 
Forcing a task to spawn a successor may be necessary in the case of a 
failed "sequential task" that cannot be re-run successfully after fixing 
the problem, because sequential tasks do not spawn until they succeed. 
Alternatively, you could force the failed task to the succeeded state, 
or insert a new instance into the suite at the next cycle time. 
 
Arguments: 
   REG                Suite name 
   TASK               Target task ID 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting. 
  --ready               Force task to the 'ready' state. 
  --waiting             Force task to the 'waiting' state. 
  --succeeded           Force task to 'succeeded' state. 
  --failed              Force task to 'failed' state. 
  --spawn               Force a task to spawn its successor if it hasn't 
                        already.

C.2.47 restart
 
Usage: cylc [control] restart [OPTIONS] REG [FILE] 
 
Restart a cylc suite from a previous recorded state (to start from 
scratch see the 'cylc run' command). 
 
Cylc suites run in daemon mode by default (without --debug) so it is 
safe to log out from your terminal after starting a suite. 
 
The most recent previous state is loaded by default, but other states 
can be specified on the command line (e.g. cylc writes special state 
dumps and logs their filenames before actioning intervention commands). 
 
WARNING: for maximum flexibility, and to avoid automatic re-submission 
of tasks that may not need re-running, task proxies are now loaded with 
states exactly as recorded in the suite state dump file. This means 
that task proxies loaded in the 'submitted' and 'running' states will 
not reflect the actual states of their associated real tasks - unless 
they really are still running. You may need to do some manual state 
resetting or triggering according to your knowledge of what happened to 
the real tasks at or after suite shutdown. 
 
NOTE: suites can be (re)started on remote hosts or other user accounts 
if passwordless ssh is set up. The ssh tunnel will remain open to 
receive the suite stdout and stderr streams. To control the running 
suite from the local host requires the suite passphrase to be installed. 
Both /etc/profile and $HOME/.profile, if they exist, will be sourced on 
the remote host before starting the suite. 
 
Arguments: 
   REG                  Suite name 
   [FILE]               Optional state dump file, assumed to reside in the 
                        suite state dump directory unless an absolute path 
                        is given. Defaults to the most recent suite state. 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --ignore-final-cycle  Ignore the final cycle time in the state dump. If one 
                        is specified in the suite definition it will be used, 
                        however. 
  --ignore-initial-cycle 
                        Ignore the initial cycle time in the state dump. If 
                        one is specified in the suite definition it will be 
                        used, however. In a restart this is only used to set 
                        $CYLC_SUITE_INITIAL_CYCLE_TIME. 
  --until=CYCLE         Shut down after all tasks have PASSED this cycle time. 
  --hold                Hold (don't run tasks) immediately on starting. 
  --hold-after=CYCLE    Hold (don't run tasks) AFTER this cycle time. 
  -m STRING, --mode=STRING 
                        Run mode: live, simulation, or dummy; default is live. 
  --reference-log       Generate a reference log for use in reference tests. 
  --reference-test      Do a test run against a previously generated reference 
                        log. 
  --from-gui            (do not use).

C.2.48 run
 
Usage: cylc [control] run|start [OPTIONS] REG [START] 
 
Start a suite running at a specified initial cycle time. 
(To restart a suite from a previous state, see 'cylc restart REG'). 
 
Cylc suites run in daemon mode by default (without --debug) so it is 
safe to log out from your terminal after starting a suite. 
 
The following are all equivalent if no intercycle dependence exists: 
  1/ Cold start (default)    : use special cold-start tasks 
  2/ Warm start (-w,--warm)  : assume a previous cycle 
  3/ Raw  start (-r,--raw)   : assume nothing 
 
1/ COLD START -- at start up, insert designated cold-start tasks in the 
waiting state, to satisfy any initial dependence on a previous cycle. 
In task environments $CYLC_SUITE_INITIAL_CYCLE_TIME will be set 
to the initial cold start cycle time. 
 
2/ WARM START -- at start up, insert designated cold-start tasks in the 
succeeded state, to stand in for a previous cycle (from a previous run). 
In task environments $CYLC_SUITE_INITIAL_CYCLE_TIME will be set to None 
unless '--ict' is used, because a warm start is really an implicit 
restart that does not reference a previous suite state - instead it 
assumes that the previous cycle (for each task) completed entirely in a 
previous run. 
 
3/ RAW START -- do not insert cold-start tasks at all. 
 
In task environments, $CYLC_SUITE_FINAL_CYCLE_TIME is always set to the 
final cycle time if one is set (by suite.rc file or command line). The 
initial and final cycle time variables persist across suite restarts. 
 
NOTE: suites can be (re)started on remote hosts or other user accounts 
if passwordless ssh is set up. The ssh tunnel will remain open to 
receive the suite stdout and stderr streams. To control the running 
suite from the local host requires the suite passphrase to be installed. 
Both /etc/profile and $HOME/.profile, if they exist, will be sourced on 
the remote host before starting the suite. 
 
Arguments: 
   REG                   Suite name 
   [START]               Initial cycle time, optional if defined in the 
                        suite.rc file (in which case the command line 
                        takes priority and a suite.rc final cycle time 
                        will be ignored); not required if the 
                        suite contains no cycling tasks. 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -w, --warm            Warm start the suite 
  -r, --raw             Raw start the suite 
  --ict                 Set $CYLC_SUITE_INITIAL_CYCLE_TIME to the initial 
                        cycle time even in a warm start (as for cold starts). 
  --until=CYCLE         Shut down after all tasks have PASSED this cycle time. 
  --hold                Hold (don't run tasks) immediately on starting. 
  --hold-after=CYCLE    Hold (don't run tasks) AFTER this cycle time. 
  -m STRING, --mode=STRING 
                        Run mode: live, simulation, or dummy; default is live. 
  --reference-log       Generate a reference log for use in reference tests. 
  --reference-test      Do a test run against a previously generated reference 
                        log. 
  --from-gui            (do not use).

C.2.49 scan
 
Usage: cylc [discover] scan [OPTIONS] 
 
Detect (by port scanning) running cylc suites and lockservers, and 
print the results. By default only your own running suites will be 
printed.  With --verbose you will also get "Connection Denied" from 
running suites owned by others on the same host. 
 
Simple space-delimited output format for easy parsing: 
    SUITE OWNER HOST PORT 
Here's one way to parse 'cylc scan' output by shell script: 
________________________________________________________________ 
#!/bin/bash 
# parse suite, owner, host, port from 'cylc scan' output lines 
OFIS=$IFS 
IFS=$'\n' 
for LINE in $( cylc scan ); do 
    # split each line on spaces into the positional parameters: 
    IFS=' '; set -- $LINE 
    echo "$1 - $2 - $3 - $4" 
done 
IFS=$OFIS 
---------------------------------------------------------------- 
 
 
Arguments: 
 
Options: 
  -h, --help          show this help message and exit 
  --owner=USER        User account name (defaults to $USER). 
  --host=HOST         Host name (defaults to localhost). 
  -v, --verbose       Verbose output mode. 
  --debug             Run suites in non-daemon mode, and show exception 
                      tracebacks. 
  --db=DB             Suite database: 'u:USERNAME' for another user's default 
                      database, or PATH to an explicit location. Defaults to 
                      $HOME/.cylc/DB. 
  --port=INT          Suite port number on the suite host. NOTE: this is 
                      retrieved automatically if passwordless ssh is 
                      configured to the suite host. 
  --use-ssh           Use ssh to re-invoke the command on the suite host. 
  --no-login          Do not use a login shell to run remote ssh commands. The 
                      default is to use a login shell. 
  --pyro-timeout=SEC  Set a timeout for network connections to the running 
                      suite. The default is no timeout. For task messaging 
                      connections see site/user config file documentation. 
  --print-ports       Print the port range from the site config file 
                      ($CYLC_DIR/conf/siterc/site.rc).

C.2.50 scp-transfer
 
Usage: cylc [util] scp-transfer [OPTIONS] 
 
An scp wrapper for transferring a list of files and/or directories 
at once. The source and target scp URLs can be local or remote (scp 
can transfer files between two remote hosts). Passwordless ssh must 
be configured appropriately. 
 
ENVIRONMENT VARIABLE INPUTS: 
$SRCE  - list of sources (files or directories) as scp URLs. 
$DEST  - parallel list of targets as scp URLs. 
The source and destination lists should be space-separated. 
 
We let scp determine the validity of source and target URLs. 
Target directories are created pre-copy if they don't exist. 
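A hypothetical invocation (the paths and the remote host 'hpc1' are placeholders, not from the cylc distribution), with a sanity check that the two lists are parallel:

```shell
# Parallel space-separated source and destination lists:
SRCE="/data/run1/out.nc /data/run1/logs"
DEST="hpc1:/archive/run1/out.nc hpc1:/archive/run1/logs"
export SRCE DEST
# The lists must have the same number of entries:
set -- $SRCE; NSRC=$#
set -- $DEST; NDST=$#
[ "$NSRC" -eq "$NDST" ] && echo "parallel lists: $NSRC items each"
# cylc scp-transfer -v    # would perform the copies (needs cylc + ssh)
```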
 
Options: 
 -v     - verbose: print scp stdout. 
 --help - print this usage message.

C.2.51 search
 
Usage: cylc [prep] search|grep [OPTIONS] REG PATTERN [PATTERN2...] 
 
Search for pattern matches in suite definitions and any files in the 
suite bin directory. Matches are reported by line number and suite 
section. An unquoted list of PATTERNs will be converted to an OR'd 
pattern. Note that the order of command line arguments conforms to 
normal cylc command usage (suite name first) not that of the grep 
command. 
 
Note that this command performs a text search on the suite definition, 
it does not search the data structure that results from parsing the 
suite definition - so it will not report implicit default settings. 
 
For case insensitive matching use '(?i)PATTERN'. 
 
Arguments: 
   REG                         Suite name 
   PATTERN                     Python-style regular expression 
   [PATTERN2...]               Additional search patterns 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -x                    Do not search in the suite bin directory

C.2.52 set-runahead
 
Usage: cylc [control] set-runahead [OPTIONS] REG [HOURS] 
 
Change the suite runahead limit in a running suite. This is the number of 
hours that the fastest task is allowed to get ahead of the slowest. If a 
task spawns beyond that limit it will be held back from running until the 
slowest tasks catch up enough. WARNING: if you omit HOURS no runahead 
limit will be set - DO NOT DO THIS for any cycling suite that has 
no near stop cycle set and is not constrained by clock-triggered 
tasks. 
 
Arguments: 
   REG                   Suite name 
   [HOURS]               Runahead limit (default: no limit) 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.53 set-verbosity
 
Usage: cylc [control] set-verbosity [OPTIONS] REG LEVEL 
 
Change the logging priority level of a running suite.  Only messages at 
or above the chosen priority level will be logged; for example, if you 
choose 'warning', only warning, error, and critical messages will be 
logged. The 'info' level is appropriate under most circumstances. 
 
Arguments: 
   REG                 Suite name 
   LEVEL               debug, info, warning, error, or critical 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.54 show
 
Usage: cylc [info] show [OPTIONS] REG [NAME[.TAG]] 
 
Interrogate a running suite for its title and task list, task 
descriptions, current state of task prerequisites and outputs and, for 
clock-triggered tasks, whether or not the trigger time is up yet. 
 
Arguments: 
   REG                        Suite name 
   [NAME[.TAG]]               Task name or ID 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location)

C.2.55 started
 
Usage: cylc [task] started [OPTIONS] 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
The started command reports commencement of task execution (and it 
acquires a task lock from the lockserver if necessary). It is 
automatically written to the top of task job scripts by cylc and 
therefore does not need to be called explicitly by task scripting. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] message 
    cylc [task] succeeded 
    cylc [task] failed 
 
Options: 
  -h, --help     show this help message and exit 
  -v, --verbose  Verbose output mode.

C.2.56 stop
 
Usage: cylc [control] stop|shutdown [OPTIONS] REG [STOP] 
 
1/ Shut down a suite when all currently running tasks have finished. 
   No other tasks will be submitted to run in the meantime. 
 
2/ With [STOP], shut down a suite AFTER one of the following events: 
    a/ all tasks have passed the TAG STOP (cycle time or async tag) 
    b/ the clock time has reached STOP (YYYY/MM/DD-HH:mm) 
    c/ the task STOP (TASK.TAG) has finished 
 
3/ With [--now], shut down immediately, regardless of tasks still running. 
   WARNING: beware of orphaning tasks that are still running at shutdown; 
   these may need to be killed manually, and they will (by default) be 
   resubmitted if the suite is restarted. 
 
Arguments: 
   REG                  Suite name 
   [STOP]               a/ task TAG (cycle time or integer), or 
                        b/ YYYY/MM/DD-HH:mm (clock time), or 
                        c/ TASK (task ID). 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting. 
  --now                 Shut down immediately; see WARNING above.

C.2.57 submit
 
Usage: cylc [task] submit|single [OPTIONS] REG TASK 
 
Submit a single task to run exactly as it would be submitted by its 
parent suite, in terms of both execution environment and job submission 
method. This can be used as an easy way to run single tasks for any 
reason, but it is particularly useful during suite development. 
 
If the parent suite is running at the same time and it has acquired an 
exclusive suite lock (which means you cannot run multiple instances 
of the suite at once, even under different registrations) then the 
lockserver will let you 'submit' a task from the suite only under the 
same registration, and only if the task is not locked (i.e. only if 
the same task, NAME.TAG, is not currently running in the suite). 
 
Arguments: 
   REG                Suite name 
   TASK               Target task (NAME.TAG) 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -d, --dry-run         Generate the cylc task execution file for the task and 
                        report how it would be submitted to run. 
  --scheduler           (EXPERIMENTAL) tell the task to run as a scheduler 
                        task, i.e. to attempt to communicate with a task proxy 
                        in a running cylc suite (you probably do not want to 
                        do this).

C.2.58 succeeded
 
Usage: cylc [task] succeeded [OPTIONS] 
 
This command is part of the cylc task messaging interface, used by 
running tasks to communicate progress to their parent suite. 
 
The succeeded command reports successful completion of task execution 
(and releases the task lock to the lockserver if necessary). It is 
automatically written to the end of task job scripts by cylc, except in 
the case of detaching tasks (suite.rc: 'manual completion = True'), in 
which case it must be called explicitly by final task scripting. 
 
Suite and task identity are determined from the task execution 
environment supplied by the suite (or by the single task 'submit' 
command, in which case the message is just printed to stdout). 
 
See also: 
    cylc [task] message 
    cylc [task] started 
    cylc [task] failed 
 
Options: 
  -h, --help     show this help message and exit 
  -v, --verbose  Verbose output mode.

C.2.59 suite-state
 
Usage: cylc suite-state REG [OPTIONS] 
 
Print task states retrieved from a suite database. Can be used to query if a 
task in a suite has reached a particular state by using the task, cycle and 
status options. 
 
Example usage: 
 
cylc suite-state REG --task=TASK --cycle=CYCLE --run-dir=CYLC-RUN --status=STATUS 
 
Returns 0 if task TASK at cycle CYCLE has reached status STATUS, 1 otherwise. 
 
The command can be run in polling mode by specifying a wait time in seconds 
using the --wait option. 
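The polling mode lends itself to wrapper scripting that acts on the command's exit status. A minimal sketch (the suite, task, and cycle names below are hypothetical placeholders):

```shell
# wait_for_task: block until TASK.CYCLE in suite REG reaches STATUS,
# or until the --wait timeout expires (exit status 0 = state reached).
wait_for_task() {
    # $1=suite  $2=task  $3=cycle  $4=status
    cylc suite-state "$1" --task="$2" --cycle="$3" \
         --status="$4" --wait=600 --interval=10
}
# Example (hypothetical names; requires a running cylc installation):
#   if wait_for_task my.suite post 2013030500 succeeded; then
#       echo "post succeeded"
#   fi
```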
 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  -t TASK, --task=TASK  Specify a task to check the state of. 
  -c CYCLE, --cycle=CYCLE 
                        Specify the cycle to check task states for. 
  -d RUN_DIR, --run-dir=RUN_DIR 
                        Specify the run directory for the suite being queried. 
  -S STATUS, --status=STATUS 
                        Specify a particular status to check for. 
  -w WAIT, --wait=WAIT  Used to specify a time (in seconds) to wait until a 
                        task achieves a particular state before exiting. 
  -i INTERVAL, --interval=INTERVAL 
                        Specify a polling interval (in seconds) for use when 
                        in wait mode (default=5).

C.2.60 test-battery
 
USAGE: cylc [admin] test-battery [options] [TOPDIR] 
 
This command runs a battery of self-diagnosing test-suites. 
See documentation of "Reference Tests" in the User Guide. 
Test batteries should be kept in a directory tree; use TOPDIR to 
target all the contained suites, or a sub-tree, or a single 
one of them. TOPDIR defaults to $CYLC_DIR/tests/, the location 
of the official cylc reference tests. 
 
Directory paths containing the word 'hidden' will be ignored. 
This can be used to hide sub-suites that are not intended to run 
as standalone tests (this is how to handle tests that are 
supposed to cause a suite to fail: put them in a hidden sub-suite 
and have the calling task in the main suite check the result). 
 
Some of the official test suites submit test jobs to a task host 
and user account taken from the environment: 
  $CYLC_TEST_TASK_HOST 
  $CYLC_TEST_TASK_OWNER 
If these are not defined they default to localhost and $USER. 
Passwordless ssh must be configured to the task host account 
(even if it is local). 
 
Options: 
  -h, --help   Print this help message and exit.

C.2.61 test-db
 
USAGE: cylc [admin] test-db [--help] 
A thorough test of suite registration database functionality. 
Options: 
  --help   Print this usage message.

C.2.62 trigger
 
Usage: cylc [control] trigger [OPTIONS] REG TASK 
 
Get a task to trigger immediately (unless the suite is paused, 
in which case it will trigger when normal operation is resumed). 
This is effected by setting the task to the 'ready' state (all 
prerequisites satisfied) and, for clock-triggered tasks, ignoring 
the designated trigger time. 
 
Arguments: 
   REG                Suite name 
   TASK               Target task 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  --port=INT            Suite port number on the suite host. NOTE: this is 
                        retrieved automatically if passwordless ssh is 
                        configured to the suite host. 
  --use-ssh             Use ssh to re-invoke the command on the suite host. 
  --no-login            Do not use a login shell to run remote ssh commands. 
                        The default is to use a login shell. 
  --pyro-timeout=SEC    Set a timeout for network connections to the running 
                        suite. The default is no timeout. For task messaging 
                        connections see site/user config file documentation. 
  -p FILE, --passphrase=FILE 
                        Suite passphrase file (if not in a default location) 
  -f, --force           Do not ask for confirmation before acting.

C.2.63 unregister
 
Usage: cylc [db] unregister [OPTIONS] REGEX 
 
Remove one or more suites from your suite database. The REGEX pattern 
must match whole suite names to avoid accidental de-registration of 
partial matches (e.g. 'bar.baz' will not match 'foo.bar.baz'). 
 
Associated suite definition directories will not be deleted unless the 
'-d,--delete' option is used. 
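The whole-name matching rule can be illustrated with an anchored pattern check. In this sketch the suite names are taken from the example above, and grep's extended regex syntax stands in for cylc's Python-style regexes:

```shell
# matches_whole: succeed only if the pattern matches the ENTIRE suite
# name, as 'cylc unregister' requires (grep -x anchors to the whole line).
matches_whole() {
    echo "$1" | grep -qxE "$2"
}
matches_whole "bar.baz"     'bar\.baz' && echo "bar.baz: match"
matches_whole "foo.bar.baz" 'bar\.baz' || echo "foo.bar.baz: no match"
```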
 
Arguments: 
   REGEX               Regular expression to match suite names. 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -d, --delete          Delete the suite definition directory too 
                        (!DANGEROUS!). 
  -f, --force           Don't ask for confirmation before deleting suite 
                        definitions. 
  --dry-run             Just show what I would do.

C.2.64 validate
 
Usage: cylc [prep] validate [OPTIONS] REG 
 
Validate a suite definition against the official specification 
files held in $CYLC_DIR/conf/suiterc/. 
 
If the suite definition uses include-files reported line numbers 
will correspond to the inlined version seen by the parser; use 
'cylc view -i,--inline SUITE' for comparison. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --strict              Fail any use of unsafe or experimental features. 
                        Currently this just means naked dummy tasks (tasks 
                        with no corresponding runtime section) as these may 
                        result from unintentional typographic errors in task 
                        names.

C.2.65 view
 
Usage: cylc [prep] view [OPTIONS] REG 
 
View a read-only temporary copy of suite REG's suite.rc file, in your 
editor, after optional include-file inlining and Jinja2 preprocessing. 
 
The edit process is spawned in the foreground as follows: 
  % <editor> suite.rc 
Where <editor> is defined in the cylc site and user config files 
($CYLC_DIR/conf/siterc/site.rc and $HOME/.cylc/user.rc). 
 
For remote host or owner, the suite will be printed to stdout unless 
the '-g,--gui' flag is used to spawn a remote GUI edit session. 
 
See also 'cylc [prep] edit'. 
 
Arguments: 
   REG               Suite name 
 
Options: 
  -h, --help            show this help message and exit 
  --owner=USER          User account name (defaults to $USER). 
  --host=HOST           Host name (defaults to localhost). 
  -v, --verbose         Verbose output mode. 
  --debug               Run suites in non-daemon mode, and show exception 
                        tracebacks. 
  --db=DB               Suite database: 'u:USERNAME' for another user's 
                        default database, or PATH to an explicit location. 
                        Defaults to $HOME/.cylc/DB. 
  -s NAME=VALUE, --set=NAME=VALUE 
                        Set the value of a template variable in the suite 
                        definition; can be used multiple times to set multiple 
                        variables.  WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  --set-file=FILE       Set the value of template variables in the suite 
                        definition from a file containing NAME=VALUE pairs 
                        (one per line). WARNING: these settings do not persist 
                        across restarts, you have to set them again on the 
                        "cylc restart" command line. 
  -i, --inline          Inline any include-files. 
  -j, --jinja2          View the suite after Jinja2 template processing. This 
                        necessarily implies '-i' as well. 
  -m, --mark            (With '-i') Mark inclusions in the left margin. 
  -l, --label           (With '-i') Label file inclusions with the file name. 
                        Line numbers will not correspond to those reported by 
                        the parser. 
  --single              (With '-i') Inline only the first instances of any 
                        multiply-included files. Line numbers will not 
                        correspond to those reported by the parser. 
  -n, --nojoin          Do not join continuation lines (line numbers will not 
                        correspond to those reported by the parser). 
  -g, --gui             Force use of the configured GUI editor. 
  --stdout              Print the suite definition to stdout.

C.2.66 warranty
 
 
USAGE: cylc [license] warranty [--help] 
   Cylc is released under the GNU General Public License v3.0 
This command prints the GPL v3.0 disclaimer of warranty. 
Options: 
  --help   Print this usage message.

D The Cylc Lockserver

Each cylc user can optionally run his/her own lockserver to prevent accidental invocation of multiple instances of the same suite or task at the same time. The suite and task locks brokered by the lockserver are analogous to traditional lock files, but they work across a network, even for distributed suites containing tasks that start executing on one host and finish on another.

Accidental invocation of multiple instances of the same suite or task at the same time can have serious consequences, so use of the lockserver should be considered for important operational suites. For general, less critical usage it may be an unnecessary complication, and it is currently disabled by default.

To enable the lockserver:

# SUITE.RC 
use lockserver = True

The suite will now abort at start-up if it cannot connect to the lockserver. To start your lockserver daemon,

% cylc lockserver start

To check that it is running,

% cylc lockserver status

For detailed usage information,

% cylc lockserver --help

There is a command line client interface,

% cylc lockclient --help

for interrogating the lockserver and managing locks manually (e.g. releasing locks if a suite was killed before it could clean up after itself).

To watch suite locks being acquired and released as a suite runs,

% watch cylc lockclient --print
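The locking semantics described above can be pictured with a short sketch (illustrative only; this is not the cylc lockserver protocol or API): a broker grants at most one lock per suite name, so a second accidental invocation of a running suite is refused until the first instance releases its lock.

```python
# Illustrative sketch only: NOT the cylc lockserver protocol or API.
# A broker grants at most one lock per suite name, so a second
# accidental invocation of the same suite is refused.
class LockBroker:
    def __init__(self):
        self.locks = {}  # suite name -> lock owner

    def acquire(self, name, owner):
        """Grant the lock unless another instance already holds it."""
        if name in self.locks:
            return False
        self.locks[name] = owner
        return True

    def release(self, name, owner):
        """Release the lock, but only for the instance that holds it."""
        if self.locks.get(name) == owner:
            del self.locks[name]
            return True
        return False

broker = LockBroker()
assert broker.acquire("my.suite", "instance-1")      # first instance runs
assert not broker.acquire("my.suite", "instance-2")  # duplicate refused
assert broker.release("my.suite", "instance-1")      # clean shutdown
```

The real lockserver additionally works across the network, and `cylc lockclient` provides the manual release step shown here for suites that die before cleaning up.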

E The Suite Control GUI Graph View

The graph view in the gcylc GUI has the advantage that it shows the structure of a suite very clearly as it evolves. It works remarkably well even for very large suites (several hundred tasks or more), but because the graphviz engine does a new global layout every time the graph changes, the layout is often not very stable. This may not be a solvable problem even in principle, as it seems likely that making continual incremental changes to an existing graph, without redoing the global layout, would inevitably result in a horrible mess.

The graph view does, however, provide some features that help mitigate the jumping layout problem:

F Cylc Project README File

 
#C: THIS FILE IS PART OF THE CYLC SUITE ENGINE. 
#C: Copyright (C) 2008-2013 Hilary Oliver, NIWA 
#C: 
#C: This program is free software: you can redistribute it and/or modify 
#C: it under the terms of the GNU General Public License as published by 
#C: the Free Software Foundation, either version 3 of the License, or 
#C: (at your option) any later version. 
#C: 
#C: This program is distributed in the hope that it will be useful, 
#C: but WITHOUT ANY WARRANTY; without even the implied warranty of 
#C: MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the 
#C: GNU General Public License for more details. 
#C: 
#C: You should have received a copy of the GNU General Public License 
#C: along with this program.  If not, see <http://www.gnu.org/licenses/>. 
 
This is the Cylc Suite Engine; run 'cylc -v' for the version. 
 
Access to cylc: 
  % export PATH=/path/to/cylc/bin:$PATH 
  % cylc help 
  % gcylc & 
 
Documentation: 
   Installation: /path/to/cylc/INSTALL 
   User Guide: /path/to/cylc/doc/index.html 
   Project Home Page: http://cylc.github.com/cylc 
 
Code Contributors (git shortlog -s -n): 
   Hilary Oliver 
   Ben Fitzpatrick 
   Matt Shin 
   Luis Kornblueh 
   Andrew Clark 
   Dave Matthews 
   Scott Wales

G Cylc Project INSTALL File

 
 
Cylc can run from a raw source tree (at a particular version) or a git 
repository clone (which can be updated to the latest version at will). 
In either case, cylc can be installed into a normal user home directory 
or a system location, so long as the full source tree remains intact. 
 
INSTALLING A SOURCE TARBALL: 
 
  % tar xzf cylc-x.y.z.tar.gz 
  % cd cylc-x.y.z 
  % make 
 
The make process does the following: 
 
  1) a VERSION file is created containing the cylc version string, e.g. 
  5.1.0. This is taken from the name of the parent directory; DO NOT 
  CHANGE THE NAME OF THE UNPACKED SOURCE TREE before running 'make'. 
 
  2) generates the Cylc User Guide from its LaTeX source files in doc/: 
    if you have pdflatex installed, a PDF version is generated, and 
    if you have tex4ht and ImageMagick convert installed, two HTML 
     versions are generated, and 
    a doc/index.html file is created with links to the generated docs. 
 
  3) The "ordereddict" Python module will be built from its C language 
  source files, in ext/ordereddict-0.4.5. This is not essential - a 
  Python implementation will be used by cylc if necessary. Currently, 
  if the build is successful you must install the module yourself into 
  your $PYTHONPATH. 
 
You may want to maintain successive versions of cylc under the same top 
level directory: 
    TOP/cylc-5.1.0/ 
    TOP/cylc-5.2.3/ 
    # etc. 
 
INSTALLING A GIT REPOSITORY CLONE: 
 
  1) To get a clone that can track the official repository: 
 
     % git clone git://github.com/cylc/cylc.git 
     % cd cylc 
     % make  # build ordereddict and documentation (as above) 
     % #... 
     % git pull origin master # update latest changes 
     % make # remake documentation in case of changes 
 
  2) To participate in cylc development: fork cylc on github, clone your 
  own fork locally, commit changes in a feature branch and then push it 
  to your fork and issue a pull request to the cylc maintainer. 
 
In a cylc repository you can re-run make at will to regenerate the 
documentation after making changes or updating the repository. Inside 
the doc directory you can rebuild specific formats of the User Guide 
using special make targets "pdf", "html", "html-single", and 
"html-multi".

H Cylc Development History

 H.1 Pre-3.0
 H.2 Version 3.0
 H.3 Version 4.0
 H.4 Version 5.0

H.1 Pre-3.0

Early versions of cylc were focused on developing and testing the new scheduling algorithm, and the suite design interface at the time was essentially the quickest route to that end. A suite was a collection of “task definition files” that encoded the prerequisites and outputs of each task in a direct reflection of cylc’s internal task proxies. This way of defining suites exposed cylc’s self-organising nature to the user, and it did have some nice properties. For instance a group of tasks could be transferred directly from one suite to another by simply copying the taskdef files over (and checking that prerequisite and output messages were consistent with the new suite). However, ensuring consistency of prerequisites and outputs across a large suite could be tedious; a few edge cases associated with suite start-up and forecast model restart dependencies were, arguably, difficult to understand; and the global structure of a suite was not readily apparent until run time (although to counter this cylc 2.x could generate run-time resolved dependency graphs very quickly in simulation mode).

H.2 Version 3.0

Version 3.0 implemented an entirely new suite design interface in which one defines the suite dependency graph, execution environment, and command scripting for each task in a single structured, validated configuration file: the suite.rc file. This makes suite structure apparent at a glance, and much important detail is now implied by the graph.
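By way of illustration, a minimal hypothetical suite.rc in the cylc 5 style might look like the following (the task names and settings are invented for this example; see the Suite.rc Reference appendix for the authoritative item names):

```
title = "a minimal two-task example"
[scheduling]
    initial cycle time = 2013010100
    final cycle time   = 2013010212
    [[dependencies]]
        [[[0,12]]]
            graph = "foo => bar"  # bar triggers once foo has finished
[runtime]
    [[foo]]
        command scripting = "echo hello from foo"
    [[bar]]
        command scripting = "echo hello from bar"
```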

H.3 Version 4.0

Version 4.0 brought several major improvements over cylc-3.x, along with many refinements.

H.4 Version 5.0

Version 5.0 contains some major internal changes to enhance performance for large suites, such as multi-threading for continuous request handling and task job submission. We also aim to provide backward compatibility for suite definitions from version 5.0 onward, wherever possible.

I Pyro

Pyro (Python Remote Objects) is a widely used, open-source, object-oriented Remote Procedure Call technology developed by Irmen de Jong.

Earlier versions of cylc used the Pyro Nameserver to marshal communication between client programs (tasks, commands, viewers, etc.) and their target suites. This worked well, but in principle it provided a route for one suite or user on the subnet to bring down all running suites by killing the nameserver. Consequently cylc now uses Pyro simply as a lightweight object oriented wrapper for direct network socket communication between client programs and their target suites - all suites are thus entirely isolated from one another.
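The isolation described above can be pictured with a small sketch using plain Python sockets (illustrative only; this is not cylc or Pyro code): each running suite serves clients on its own dedicated port, so there is no shared nameserver whose loss could affect other suites.

```python
# Illustrative sketch only, not cylc or Pyro code: each running "suite"
# listens on its own dedicated port and clients connect to it directly,
# so there is no shared nameserver to act as a single point of failure.
import socket
import threading

def serve_once(srv, reply):
    # Accept one connection, send a reply, and close (stands in for a
    # suite answering one client request).
    conn, _ = srv.accept()
    conn.sendall(reply)
    conn.close()

def start_suite(reply):
    # Bind to an OS-assigned port and serve in the background; return
    # the port a client must use to reach this particular suite.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=serve_once, args=(srv, reply), daemon=True).start()
    return srv.getsockname()[1]

def query_suite(port):
    # A client program (task, command, viewer) contacting one suite.
    with socket.create_connection(("127.0.0.1", port)) as cli:
        return cli.recv(1024)

port_a = start_suite(b"suite A status")
port_b = start_suite(b"suite B status")
print(query_suite(port_a).decode())  # prints "suite A status"
print(query_suite(port_b).decode())  # prints "suite B status"
```

Killing either "suite" here affects only its own clients, which is the design property the direct-connection approach provides.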

J GNU GENERAL PUBLIC LICENSE v3.0

Copyright © 2007 Free Software Foundation, Inc. http://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program–to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers’ and authors’ protection, the GPL clearly explains that there is no warranty for this free software. For both users’ and authors’ sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users’ freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

Terms and Conditions

  0. Definitions.

    “This License” refers to version 3 of the GNU General Public License.

    “Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

    “The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.

    To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.

    A “covered work” means either the unmodified Program or a work based on the Program.

    To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

    To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

    An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

  1. Source Code.

    The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.

    A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

    The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

    The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work’s System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

    The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

    The Corresponding Source for a work in source code form is that same work.

  2. Basic Permissions.

    All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

    You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

    Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

  3. Protecting Users’ Legal Rights From Anti-Circumvention Law.

    No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

    When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work’s users, your or third parties’ legal rights to forbid circumvention of technological measures.

  4. Conveying Verbatim Copies.

    You may convey verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

    You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

    You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
    b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
    c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
    d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

    A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation’s users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

  6. Conveying Non-Source Forms.

    You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

    a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
    b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
    c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
    d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
    e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

    A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

    A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

    “Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

    If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

    The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

    Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

  7. Additional Terms.

    “Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

    When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

    Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
    b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
    c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
    d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
    e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
    f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

    All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

    If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

    Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

  8. Termination.

    You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

    However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

    Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

    Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

  9. Acceptance Not Required for Having Copies.

    You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

    Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

    An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party’s predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

    You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

  11. Patents.

    A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor’s “contributor version”.

    A contributor’s “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

    Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor’s essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

    In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

    If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient’s use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

    If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

    A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

    Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

  13. No Surrender of Others’ Freedom.

    If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

  14. Use with the GNU Affero General Public License.

    Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

  15. Revised Versions of this License.

    The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

    Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

    If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

    Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

  16. Disclaimer of Warranty.

    THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  17. Limitation of Liability.

    IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

  18. Interpretation of Sections 15 and 16.

    If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

    End of Terms and Conditions

    How to Apply These Terms to Your New Programs

    If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

    To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>  
     
    Copyright (C) <year>  <name of author>  
     
    This program is free software: you can redistribute it and/or modify  
    it under the terms of the GNU General Public License as published by  
    the Free Software Foundation, either version 3 of the License, or  
    (at your option) any later version.  
     
    This program is distributed in the hope that it will be useful,  
    but WITHOUT ANY WARRANTY; without even the implied warranty of  
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the  
    GNU General Public License for more details.  
     
    You should have received a copy of the GNU General Public License  
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

    Also add information on how to contact you by electronic and paper mail.

    If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>  
     
    This program comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.  
    This is free software, and you are welcome to redistribute it  
    under certain conditions; type ‘show c’ for details.

    The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of the General Public License. Of course, your program’s commands might be different; for a GUI interface, you would use an “about box”.

    You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see http://www.gnu.org/licenses/.

    The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read http://www.gnu.org/philosophy/why-not-lgpl.html.

1Future plans for EcoConnect include additional deterministic regional weather forecasts and a statistical ensemble.

2In fact this dependency negotiation goes through a broker object (rather than every task literally checking every other task) which scales as n (rather than n²) where n is the number of task proxies in the pool.

3Sections are closed by the next section heading, so items within a section must be defined before any subsequent subsection headings.

4The exceptions were designed to allow tasks to override environment variables defined in include-files that could be included in multiple tasks, to assist in factoring out common task configuration. However, namespace inheritance now provides a better way to do this in most cases.

5In NWP forecast analysis suites parts of the observation processing and data assimilation subsystem will typically also depend on model background fields generated by the previous forecast.

6An OR operator on the right doesn’t make much sense: if “B or C” triggers off A, what exactly should cylc do when A finishes?

7A warm cycling model that only writes out one set of restart files, for the very next cycle, does not need to be declared sequential because this early triggering problem cannot arise.

8Note that $CYLC_SUITE_ENVIRONMENT is a string containing embedded newline characters and it has to be handled accordingly. In the bash shell, for instance, it should be echoed in quotes to avoid concatenation to a single line.
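The quoting behaviour described in this footnote can be demonstrated with a short bash sketch (the variable name and contents below are illustrative stand-ins, not the real suite environment string):

```shell
#!/bin/bash
# Illustrative only: a multi-line string standing in for $CYLC_SUITE_ENVIRONMENT.
multiline=$'FOO=1\nBAR=2'

# Quoted expansion preserves the embedded newline: two lines of output.
echo "$multiline"

# Unquoted expansion undergoes word splitting, so the newline is lost
# and everything is concatenated onto a single line.
echo $multiline
```

The quoted form prints two lines; the unquoted form prints `FOO=1 BAR=2` on one line, which is why the footnote advises echoing the variable in quotes.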

9The cylc submit command runs a single task exactly as its suite would, in terms of both job submission method and execution environment.

10If you copy a suite using cylc commands or the db viewer, the entire suite definition directory will be copied.

11Spawning any earlier than this brings no advantage in terms of functional parallelism and would cause uncontrolled proliferation of waiting tasks.

12This is because you don’t want Model[T] waiting around to trigger off Model[T-12] if Model[T-6] has not finished yet. If Model is forced to be sequential this can’t happen because Model[T] won’t exist in the suite until Model[T-6] has finished. But if Model[T-6] fails, it can be spawned-and-removed from the suite so that Model[T] can then trigger off Model[T-12], which is the correct behaviour.